Random Scaling and Momentum for Non-smooth Non-convex Optimization
Main authors:
Format: Article
Language: English
Online access: Order full text
Abstract: Training neural networks requires optimizing a loss function that may be highly irregular, and in particular neither convex nor smooth. Popular training algorithms are based on stochastic gradient descent with momentum (SGDM), for which classical analysis applies only if the loss is either convex or smooth. We show that a very small modification to SGDM closes this gap: simply scale the update at each time point by an exponentially distributed random scalar. The resulting algorithm achieves optimal convergence guarantees. Intriguingly, this result is not derived by a specific analysis of SGDM: instead, it falls naturally out of a more general framework for converting online convex optimization algorithms to non-convex optimization algorithms.
DOI: 10.48550/arxiv.2405.09742
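
The abstract describes the core modification: multiply each SGDM update by an exponentially distributed random scalar. Below is a minimal, illustrative sketch of what such an update rule could look like. The paper's exact formulation (where the random scalar enters, the momentum parameterization, step sizes, and the toy objective `subgrad`) is not given in this record, so every name and hyperparameter here is an assumption rather than the authors' algorithm.

```python
import numpy as np

def sgdm_random_scaling(grad_fn, x0, lr=0.05, beta=0.9, steps=2000, seed=0):
    """SGDM sketch where each step's update is multiplied by an Exponential(1) scalar.

    grad_fn(x) returns a (possibly stochastic) gradient or subgradient at x.
    All parameter choices are illustrative assumptions, not the paper's settings.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x)
        m = beta * m + (1.0 - beta) * g      # standard momentum buffer
        s = rng.exponential(scale=1.0)       # random scale with E[s] = 1
        x = x - lr * s * m                   # randomly scaled update
    return x

# Toy non-smooth, non-convex objective f(x) = |x| * (1 + 0.5*sin(x)),
# with a (sub)gradient valid away from the kink at 0 (hypothetical example).
def subgrad(x):
    return np.sign(x) * (1 + 0.5 * np.sin(x)) + 0.5 * np.abs(x) * np.cos(x)

if __name__ == "__main__":
    print(sgdm_random_scaling(subgrad, x0=np.array([3.0])))
```

The only change relative to standard SGDM is drawing `s` from an Exponential(1) distribution each step; fixing `s = 1` recovers the usual momentum update.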