Scalable Second Order Optimization for Deep Learning
Saved in:

Main authors: 
Format: Article
Language: English
Subjects: 
Online access: Order full text
Abstract: Optimization in machine learning, both theoretical and applied, is presently dominated by first-order gradient methods such as stochastic gradient descent. Second-order optimization methods, which involve second derivatives and/or second-order statistics of the data, are far less prevalent despite strong theoretical properties, due to their prohibitive computation, memory, and communication costs. In an attempt to bridge this gap between theoretical and practical optimization, we present a scalable implementation of a second-order preconditioned method (concretely, a variant of full-matrix Adagrad) that, along with several critical algorithmic and numerical improvements, provides significant convergence and wall-clock time improvements compared to conventional first-order methods on state-of-the-art deep models. Our novel design effectively utilizes the prevalent heterogeneous hardware architecture for training deep models, consisting of a multicore CPU coupled with multiple accelerator units. We demonstrate superior performance compared to the state of the art on very large learning tasks such as machine translation with Transformers, language modeling with BERT, click-through rate prediction on Criteo, and image classification on ImageNet with ResNet-50.
DOI: 10.48550/arxiv.2002.09018
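
For orientation, the sketch below shows the textbook full-matrix Adagrad update that the abstract's "second-order preconditioned method" builds on: gradients are preconditioned by the inverse square root of the accumulated outer-product matrix. This is only a minimal illustration of the generic update, not the paper's scalable implementation (which adds the algorithmic, numerical, and heterogeneous-hardware improvements described above); the function and parameter names (`full_matrix_adagrad`, `grad_fn`, `lr`, `eps`, `steps`) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the (unscaled) full-matrix Adagrad update for a single flat
# parameter vector. The paper's contribution is making a variant of this
# tractable at deep-learning scale, which is NOT reproduced here.
import numpy as np


def full_matrix_adagrad(grad_fn, x0, lr=0.1, eps=1e-6, steps=100):
    """Precondition each gradient by the inverse square root of the
    accumulated outer-product matrix H_t = sum_s g_s g_s^T."""
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    H = np.zeros((d, d))             # accumulated gradient outer products
    for _ in range(steps):
        g = grad_fn(x)
        H += np.outer(g, g)          # H_t = H_{t-1} + g_t g_t^T
        # Inverse matrix square root via eigendecomposition (H is symmetric PSD).
        vals, vecs = np.linalg.eigh(H + eps * np.eye(d))
        H_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
        x -= lr * H_inv_sqrt @ g     # preconditioned step
    return x


if __name__ == "__main__":
    # Toy quadratic: minimize 0.5 * x^T A x with an ill-conditioned A.
    A = np.diag([100.0, 1.0])
    x_final = full_matrix_adagrad(lambda x: A @ x, x0=[1.0, 1.0])
    print(x_final)  # moves toward the minimizer [0, 0]
```

The point of the sketch is to make the cost structure visible: maintaining and taking the inverse square root of a d-by-d matrix is quadratic in memory and roughly cubic in compute per step, which is exactly the overhead the paper's scalable variant is designed to overcome.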