Exploring Low Rank Training of Deep Neural Networks
Main Authors: | |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | Training deep neural networks in low rank, i.e. with factorised layers, is of particular interest to the community: it offers efficiency over unfactorised training in terms of both memory consumption and training time. Prior work has focused on low rank approximations of pre-trained networks and on training in low rank space with additional objectives, offering various ad hoc explanations for the chosen practices. We analyse techniques that work well in practice, and through extensive ablations on models such as GPT2 we provide evidence falsifying common beliefs in the field, hinting in the process at exciting research questions that remain open. |
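For context, "training in low rank with factorised layers" means replacing each dense weight matrix W of shape m×n with a product of two thin factors U (m×r) and V (r×n), where the rank r is much smaller than min(m, n), so a layer stores r(m+n) parameters instead of mn. A minimal PyTorch sketch of such a factorised layer follows; the class name, initialisation scale, and choice of rank are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Linear layer with its weight factorised as W ~= U @ V (rank r)."""

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        # Two thin factors replace the dense out_features x in_features weight:
        # parameter count falls from out*in to rank*(out + in).
        self.U = nn.Parameter(torch.randn(out_features, rank) / rank ** 0.5)
        self.V = nn.Parameter(torch.randn(rank, in_features) / in_features ** 0.5)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Two skinny matmuls instead of one dense one; U @ V is never materialised.
        return x @ self.V.t() @ self.U.t() + self.bias

# Example: a 768x768 projection (GPT2-small width) at rank 64 stores
# 64 * (768 + 768) = 98,304 weight parameters instead of 768 * 768 = 589,824.
layer = LowRankLinear(768, 768, rank=64)
out = layer(torch.randn(2, 768))
```

The efficiency claim in the abstract follows directly from this structure: both the memory for the weights and the per-layer matmul cost scale with r(m+n) rather than mn.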
DOI: | 10.48550/arxiv.2209.13569 |