Sketchy: Memory-efficient Adaptive Regularization with Frequent Directions
Main authors: , , , ,
Format: Article
Language: English
Online access: Order full text
Abstract: Adaptive regularization methods that exploit more than the diagonal entries
exhibit state of the art performance for many tasks, but can be prohibitive in
terms of memory and running time. We find the spectra of the Kronecker-factored
gradient covariance matrix in deep learning (DL) training tasks are
concentrated on a small leading eigenspace that changes throughout training,
motivating a low-rank sketching approach. We describe a generic method for
reducing memory and compute requirements of maintaining a matrix preconditioner
using the Frequent Directions (FD) sketch. While previous approaches have
explored applying FD for second-order optimization, we present a novel analysis
which allows efficient interpolation between resource requirements and the
degradation in regret guarantees with rank $k$: in the online convex
optimization (OCO) setting over dimension $d$, we match full-matrix $d^2$
memory regret using only $dk$ memory up to additive error in the bottom $d-k$
eigenvalues of the gradient covariance. Further, we show extensions of our work
to Shampoo, resulting in a method competitive in quality with Shampoo and Adam,
yet requiring only sub-linear memory for tracking second moments.
DOI: 10.48550/arxiv.2302.03764
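
The abstract describes maintaining a matrix preconditioner from a Frequent Directions (FD) sketch of the gradient covariance, using $dk$ memory instead of $d^2$. As a rough illustration of that generic FD update, and not the paper's actual Sketchy or Shampoo implementation, the minimal NumPy sketch below keeps a $2k \times d$ buffer of gradients and, whenever the buffer fills, shrinks the squared singular values by the $(k+1)$-th one so the bottom rows become free again; the class name, buffer size, and shrinkage rule are illustrative assumptions.

```python
import numpy as np


class FrequentDirectionsSketch:
    """Illustrative Frequent Directions sketch (not the paper's code).

    Maintains a buffer B with 2k rows such that B.T @ B approximates the
    accumulated gradient covariance sum_t g_t g_t^T using O(d * k) memory.
    """

    def __init__(self, d: int, k: int):
        assert d > k, "illustrative sketch assumes dimension d exceeds rank k"
        self.k = k
        self.B = np.zeros((2 * k, d))
        self.next_free = k  # rows [k, 2k) are free slots for incoming gradients

    def update(self, g: np.ndarray) -> None:
        """Insert one gradient vector g of shape (d,) into the sketch."""
        self.B[self.next_free] = g
        self.next_free += 1
        if self.next_free == 2 * self.k:
            self._shrink()

    def _shrink(self) -> None:
        # SVD of the buffer, then subtract the (k+1)-th squared singular value
        # from all squared singular values; the bottom rows become zero and
        # are reused for the next batch of gradients.
        _, s, vt = np.linalg.svd(self.B, full_matrices=False)
        delta = s[self.k] ** 2
        s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
        shrunk = np.zeros_like(self.B)
        shrunk[: len(s)] = s[:, None] * vt
        self.B = shrunk
        self.next_free = self.k

    def covariance(self) -> np.ndarray:
        """Low-rank estimate of the gradient covariance (d x d, rank <= 2k)."""
        return self.B.T @ self.B
```

Because the shrinkage step only removes mass along the smallest singular directions, $B^\top B$ underestimates the true covariance $\sum_t g_t g_t^\top$ by an additive term controlled by its tail eigenvalues, which is the mechanism behind the regret guarantee stated in the abstract: full-matrix-quality regret with only $dk$ memory, up to additive error in the bottom $d-k$ eigenvalues.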