Thermodynamic Natural Gradient Descent
Main Author(s):
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Second-order training methods have better convergence properties than gradient descent but are rarely used in practice for large-scale training due to their computational overhead. This can be viewed as a hardware limitation (imposed by digital computers). Here we show that natural gradient descent (NGD), a second-order method, can have a similar computational complexity per iteration to a first-order method, when employing appropriate hardware. We present a new hybrid digital-analog algorithm for training neural networks that is equivalent to NGD in a certain parameter regime but avoids prohibitively costly linear system solves. Our algorithm exploits the thermodynamic properties of an analog system at equilibrium, and hence requires an analog thermodynamic computer. The training occurs in a hybrid digital-analog loop, where the gradient and Fisher information matrix (or any other positive semi-definite curvature matrix) are calculated at given time intervals while the analog dynamics take place. We numerically demonstrate the superiority of this approach over state-of-the-art digital first- and second-order training methods on classification tasks and language model fine-tuning tasks.
DOI: 10.48550/arxiv.2405.13817
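For context on the abstract above, the sketch below shows the conventional, fully digital NGD update that the paper contrasts against: each iteration requires solving a linear system involving the Fisher (or other positive semi-definite curvature) matrix, which is the per-iteration cost the proposed hybrid digital-analog scheme sidesteps by letting an analog thermodynamic system equilibrate instead. This is a minimal illustrative sketch, not the paper's algorithm; the function name, damping value, and toy quadratic problem are assumptions made for the example.

```python
import numpy as np

def digital_ngd_step(params, grad, fisher, lr=1e-2, damping=1e-3):
    """One natural gradient descent (NGD) update on digital hardware.

    Update rule: theta <- theta - lr * (F + damping * I)^{-1} grad,
    where F is the Fisher information matrix (or any other positive
    semi-definite curvature matrix). The np.linalg.solve call below is
    the costly linear system solve that the paper's thermodynamic
    approach avoids.
    """
    n = params.size
    natural_grad = np.linalg.solve(fisher + damping * np.eye(n), grad)
    return params - lr * natural_grad

# Hypothetical toy usage: a quadratic loss 0.5 * theta^T F theta with a
# random positive semi-definite curvature matrix.
rng = np.random.default_rng(0)
theta = rng.normal(size=5)
A = rng.normal(size=(5, 5))
F = A @ A.T          # positive semi-definite curvature estimate
g = F @ theta        # gradient of the quadratic loss at theta
theta = digital_ngd_step(theta, g, F)
```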