Special Properties of Gradient Descent with Large Learning Rates
Format: Article
Language: English
Abstract: When training neural networks, it has been widely observed that a large step size is essential in stochastic gradient descent (SGD) for obtaining superior models. However, the effect of large step sizes on the success of SGD is not well understood theoretically. Several previous works have attributed this success to the stochastic noise present in SGD. In contrast, we show through a novel set of experiments that stochastic noise is not sufficient to explain good non-convex training, and that instead the effect of a large learning rate itself is essential for obtaining the best performance. We demonstrate the same effects also in the noiseless case, i.e. for full-batch GD. We formally prove that GD with a large step size -- on certain non-convex function classes -- follows a different trajectory than GD with a small step size, which can lead to convergence to a global minimum instead of a local one. Our settings provide a framework for future analysis which allows comparing algorithms based on behaviors that cannot be observed in the traditional settings.
DOI: 10.48550/arxiv.2205.15142
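
To make the abstract's trajectory claim concrete, below is a minimal full-batch gradient descent sketch on a toy 1-D non-convex objective. The tilted double-well function, the step sizes 0.01 and 0.2, and the starting point x0 = 1.5 are illustrative assumptions for this sketch only; they are not the function classes or parameters analyzed in the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's setup):
# full-batch GD with a small vs. a large fixed step size on a toy
# 1-D non-convex objective.

def f(x):
    # Tilted double well: global minimum near x ~ -1.04,
    # a worse local minimum near x ~ 0.96.
    return (x**2 - 1) ** 2 + 0.3 * x

def grad_f(x):
    return 4 * x * (x**2 - 1) + 0.3

def gradient_descent(x0, lr, steps=500):
    # Full-batch GD with a fixed step size lr.
    x = x0
    for _ in range(steps):
        x = x - lr * grad_f(x)
    return x

x0 = 1.5
for lr in (0.01, 0.2):  # small vs. large learning rate
    x_final = gradient_descent(x0, lr)
    print(f"lr={lr:<4}  final x = {x_final:+.3f}  f(x) = {f(x_final):+.3f}")

# Behavior on this toy problem: the small step size settles in the
# suboptimal local minimum near x ~ 0.96, while the large step size
# overshoots that basin on its first update and converges to the global
# minimum near x ~ -1.04, i.e. the two step sizes follow different
# trajectories from the same starting point.
```

This toy example only mirrors the qualitative phenomenon described in the abstract (different trajectories and a better minimum with a large step size); the paper's formal results concern specific non-convex function classes, not this particular objective.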