Saddle-to-Saddle Dynamics in Diagonal Linear Networks
Saved in:
Main authors: | , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | In this paper we fully describe the trajectory of gradient flow over
diagonal linear networks in the limit of vanishing initialisation. We show
that the limiting flow successively jumps from one saddle of the training loss
to another until reaching the minimum $\ell_1$-norm solution. These
saddle-to-saddle dynamics translate to an incremental learning process, as
each saddle corresponds to the minimiser of the loss constrained to an active
set outside of which the coordinates must be zero. We explicitly characterise
the visited saddles as well as the jump times through a recursive algorithm
reminiscent of the LARS algorithm used for computing the Lasso path. Our proof
leverages a convenient arc-length time-reparametrisation which enables us to
keep track of the heteroclinic transitions between the jumps. Our analysis
requires only minimal assumptions on the data, applies to both under- and
overparametrised settings, and covers complex cases where the number of active
coordinates is not monotone. We provide numerical experiments to support our
findings. |
---|---|
DOI: | 10.48550/arxiv.2304.00488 |
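The abstract describes gradient flow over a diagonal linear network with vanishing initialisation exhibiting incremental, saddle-to-saddle learning. Below is a minimal illustrative sketch (not the authors' code): it approximates the flow by gradient descent with a small step size on the parametrisation $\beta = u \odot v$. The data dimensions, sparse ground truth, initialisation scale, learning rate, and activation threshold are all illustrative assumptions.

```python
# Sketch: gradient descent with a tiny step approximates gradient flow on a
# diagonal linear network beta = u * v, initialised at scale alpha. With alpha
# small, the iterates are expected to plateau near saddles of the loss,
# activating coordinates incrementally before reaching a sparse solution.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 10                       # illustrative sizes (d > n also works)
X = rng.standard_normal((n, d))
beta_star = np.zeros(d)
beta_star[:3] = [3.0, -2.0, 1.0]    # sparse ground truth, chosen for illustration
y = X @ beta_star

alpha = 1e-6                        # vanishing-initialisation regime
u = alpha * np.ones(d)
v = alpha * np.ones(d)
lr = 1e-3                           # small step size to mimic the continuous flow

for t in range(300_001):
    beta = u * v                    # effective linear predictor: beta = u ⊙ v
    residual = X @ beta - y
    grad_beta = X.T @ residual / n  # gradient of the quadratic loss w.r.t. beta
    # chain rule through beta = u * v (simultaneous update of both factors)
    u, v = u - lr * grad_beta * v, v - lr * grad_beta * u
    if t % 50_000 == 0:
        active = np.flatnonzero(np.abs(u * v) > 1e-3)
        print(f"step {t:>7d}: loss={0.5 * np.mean(residual ** 2):.2e}  "
              f"active set={active}")
```

With alpha small, the printed active set should grow in plateaus: the iterates linger near a saddle (the minimiser of the loss restricted to the current active set) before a new coordinate activates. Since the paper's recursive characterisation of the visited saddles is reminiscent of LARS, the observed sequence of active sets can be compared against the Lasso path, e.g. computed with sklearn.linear_model.lars_path.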