Stochastic Gradient Methods with Block Diagonal Matrix Adaptation
Format: Article
Language: English
Abstract: Adaptive gradient approaches that automatically adjust the learning rate on a per-feature basis have been very popular for training deep networks. This rich class of algorithms includes Adagrad, RMSprop, Adam, and recent extensions. All of these algorithms have adopted diagonal matrix adaptation, due to the prohibitive computational burden of manipulating full matrices in high dimensions. In this paper, we show that block-diagonal matrix adaptation can be a practical and powerful solution that effectively exploits structural characteristics of deep learning architectures and significantly improves convergence and out-of-sample generalization. We present a general framework with block-diagonal matrix updates via coordinate grouping, which includes counterparts of the aforementioned algorithms, and prove their convergence in non-convex optimization, highlighting the benefits compared to the diagonal versions. In addition, we propose an efficient spectrum-clipping scheme that benefits from the superior generalization performance of SGD. Extensive experiments reveal that block-diagonal approaches achieve state-of-the-art results on several deep learning tasks, and can outperform adaptive diagonal methods, vanilla SGD, as well as a recently proposed modified version of full-matrix adaptation.
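
As an illustration only (not the authors' exact method), the sketch below shows how an Adagrad-style update with block-diagonal matrix adaptation via coordinate grouping might look, together with a simple eigenvalue-clamping step standing in for the spectrum-clipping idea. The function name `block_adagrad_step`, the coordinate partition, the clipping interval, and the toy quadratic problem are all assumptions made for the example.

```python
# Minimal sketch of block-diagonal (Adagrad-style) matrix adaptation with
# coordinate grouping, plus an illustrative spectrum-clipping step.
# All names and constants here are assumptions, not taken from the paper.
import numpy as np

def block_adagrad_step(params, grad, state, groups, lr=0.1, eps=1e-8,
                       clip=None):
    """One update over flattened parameters.

    params, grad : 1-D arrays of the same length.
    state        : dict mapping block index -> accumulated d_b x d_b matrix G_b.
    groups       : list of index arrays; each array is one coordinate block.
    clip         : optional (lo, hi) interval for clamping the eigenvalues of
                   G_b^{1/2} (illustrative spectrum clipping); None disables it.
    """
    new_params = params.copy()
    for b, idx in enumerate(groups):
        g_b = grad[idx]
        d_b = len(idx)
        # Accumulate the full d_b x d_b outer-product matrix for this block only.
        G_b = state.get(b, np.zeros((d_b, d_b))) + np.outer(g_b, g_b)
        state[b] = G_b
        # Eigendecomposition of the symmetric PSD accumulator.
        w, V = np.linalg.eigh(G_b)
        root = np.sqrt(np.maximum(w, 0.0))        # eigenvalues of G_b^{1/2}
        if clip is not None:
            lo, hi = clip
            # Clamp the spectrum; with lo == hi this reduces to a rescaled SGD step.
            root = np.clip(root, lo, hi)
        # Preconditioned step: x_b <- x_b - lr * (G_b^{1/2} + eps I)^{-1} g_b
        precond_grad = V @ ((V.T @ g_b) / (root + eps))
        new_params[idx] = params[idx] - lr * precond_grad
    return new_params

# Toy usage: a 5-dimensional quadratic split into two coordinate blocks.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T / 5 + np.eye(5)                       # well-conditioned PSD matrix
x = rng.standard_normal(5)
groups = [np.array([0, 1, 2]), np.array([3, 4])]
state = {}
for _ in range(200):
    grad = A @ x                                  # gradient of 0.5 * x^T A x
    x = block_adagrad_step(x, grad, state, groups, lr=0.1, clip=(1.0, 10.0))
print("final objective:", 0.5 * x @ A @ x)
```

Each block maintains only a d_b x d_b accumulator, so the per-step cost is governed by the largest block rather than the full parameter dimension; the clipping interval controls how close the preconditioned update stays to plain SGD.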
DOI: 10.48550/arxiv.1905.10757