AdaScale SGD: A User-Friendly Algorithm for Distributed Training
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: When using large-batch training to speed up stochastic gradient descent, learning rates must adapt to new batch sizes in order to maximize speed-ups and preserve model quality. Re-tuning learning rates is resource intensive, while fixed scaling rules often degrade model quality. We propose AdaScale SGD, an algorithm that reliably adapts learning rates to large-batch training. By continually adapting to the gradient's variance, AdaScale automatically achieves speed-ups for a wide range of batch sizes. We formally describe this quality with AdaScale's convergence bound, which maintains final objective values, even as batch sizes grow large and the number of iterations decreases. In empirical comparisons, AdaScale trains well beyond the batch size limits of popular "linear learning rate scaling" rules. This includes large-batch training with no model degradation for machine translation, image classification, object detection, and speech recognition tasks. AdaScale's qualitative behavior is similar to that of "warm-up" heuristics, but unlike warm-up, this behavior emerges naturally from a principled mechanism. The algorithm introduces negligible computational overhead and no new hyperparameters, making AdaScale an attractive choice for large-scale training in practice.
DOI: 10.48550/arxiv.2007.05105
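
To illustrate the variance-adaptive mechanism the abstract describes, below is a minimal sketch of a gain-based learning-rate adjustment, assuming a gain of the form (sigma^2 + mu^2) / (sigma^2/S + mu^2), where sigma^2 estimates the per-worker gradient variance, mu^2 the squared norm of the true gradient, and S the batch-size scale. This is not the authors' reference implementation; the function name `adascale_gain`, the moving-average smoothing, and the estimator details are illustrative assumptions.

```python
# Sketch only: a variance-based learning-rate gain in the spirit of the
# abstract's description, not the paper's reference implementation.
import numpy as np

def adascale_gain(worker_grads, state, scale, smoothing=0.999):
    """Return a learning-rate gain for one step at batch-size scale `scale`.

    worker_grads: list of `scale` flattened gradient vectors (scale >= 2).
    state: dict holding running estimates of gradient variance and squared mean.
    """
    grads = np.stack(worker_grads)          # shape: (scale, num_params)
    mean_grad = grads.mean(axis=0)          # aggregated (large-batch) gradient

    # Estimate the per-worker gradient variance (trace of the covariance) and,
    # from it, the squared norm of the expected gradient:
    # E[||mean_grad||^2] = ||mu||^2 + sigma^2 / scale.
    var_est = grads.var(axis=0, ddof=1).sum()
    sq_norm_est = float(mean_grad @ mean_grad) - var_est / scale

    # Smooth the noisy per-step estimates with exponential moving averages
    # (the smoothing constant is an arbitrary choice for this sketch).
    state["sigma_sq"] = smoothing * state.get("sigma_sq", var_est) + (1 - smoothing) * var_est
    state["mu_sq"] = smoothing * state.get("mu_sq", sq_norm_est) + (1 - smoothing) * sq_norm_est

    sigma_sq = max(state["sigma_sq"], 0.0)
    mu_sq = max(state["mu_sq"], 1e-12)

    # Gain lies between 1 (variance-negligible) and `scale` (noise-dominated):
    # larger batches buy the most when gradient noise dominates.
    return (sigma_sq + mu_sq) / (sigma_sq / scale + mu_sq)

# Illustrative usage: scale a base learning rate by the gain at each step.
state = {}
toy_grads = [np.random.randn(10) for _ in range(8)]   # 8 workers, toy gradients
gain = adascale_gain(toy_grads, state, scale=8)
lr = 0.1 * gain                                       # effective step size
```

Because the gain stays between 1 and the scale, a schedule driven by it behaves like the warm-up-style ramp the abstract mentions: early in training, when gradient noise is large relative to the gradient itself, the gain is close to the full scale, and it shrinks as the variance-to-mean ratio changes.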