Navigating Extremes: Dynamic Sparsity in Large Output Space
Format: Article
Language: English
Abstract: In recent years, Dynamic Sparse Training (DST) has emerged as an alternative to post-training pruning for generating efficient models. In principle, DST allows for a more memory-efficient training process, as it maintains sparsity throughout the entire training run. However, current DST implementations fail to capitalize on this in practice. Because sparse matrix multiplication is much less efficient than dense matrix multiplication on GPUs, most implementations simulate sparsity by masking weights. In this paper, we leverage recent advances in semi-structured sparse training to apply DST in the domain of classification with large output spaces, where memory efficiency is paramount. With a label space of possibly millions of candidates, the classification layer alone will consume several gigabytes of memory. Switching from a dense layer to a fixed fan-in sparse layer updated with sparse evolutionary training (SET), however, severely hampers training convergence, especially at the largest label spaces. We find that poor gradient flow from the sparse classifier to the dense text encoder makes it difficult to learn good input representations. By employing an intermediate layer or adding an auxiliary training objective, we recover most of the generalisation performance of the dense model. Overall, we demonstrate the applicability and practical benefits of DST in a challenging domain, characterized by a highly skewed label distribution that differs substantially from typical DST benchmark datasets, enabling end-to-end training with millions of labels on commodity hardware.
DOI: 10.48550/arxiv.2411.03171
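
The abstract refers to a fixed fan-in sparse classification layer updated with sparse evolutionary training (SET). The PyTorch sketch below is only a rough illustration of that general idea, not the paper's implementation (which builds on semi-structured sparse kernels rather than gather-based indexing): each label stores exactly `fan_in` weights plus their column indices, so memory scales with `num_labels * fan_in` instead of `num_labels * in_features`, and a periodic SET-style step prunes the smallest-magnitude connections and regrows random ones. All names here (`FixedFanInSparseLinear`, `set_update`, `prune_frac`) are hypothetical.

```python
import torch
import torch.nn as nn


class FixedFanInSparseLinear(nn.Module):
    """Sparse classification layer: every label keeps exactly `fan_in` weights."""

    def __init__(self, in_features: int, num_labels: int, fan_in: int):
        super().__init__()
        self.in_features = in_features
        # One weight and one input-column index per (label, connection) pair.
        self.weight = nn.Parameter(torch.randn(num_labels, fan_in) * 0.01)
        idx = torch.stack([torch.randperm(in_features)[:fan_in]
                           for _ in range(num_labels)])
        self.register_buffer("indices", idx)  # (num_labels, fan_in), long

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features). Gather the inputs each label is wired to,
        # then take the per-label dot product with the stored weights.
        gathered = x[:, self.indices]                     # (batch, num_labels, fan_in)
        return torch.einsum("blk,lk->bl", gathered, self.weight)

    @torch.no_grad()
    def set_update(self, prune_frac: float = 0.3):
        """SET-style step: drop the smallest-magnitude connections per label
        and regrow the same number at random new input positions."""
        num_labels, fan_in = self.weight.shape
        n_prune = max(1, int(prune_frac * fan_in))
        # Positions (within each label's fan-in) of the smallest |w|.
        prune_pos = self.weight.abs().topk(n_prune, dim=1, largest=False).indices
        # Regrow at random columns (a fuller version would resample duplicates).
        new_idx = torch.randint(0, self.in_features, (num_labels, n_prune),
                                device=self.weight.device)
        self.indices.scatter_(1, prune_pos, new_idx)
        # Newly grown connections start from zero.
        self.weight.scatter_(1, prune_pos,
                             torch.zeros_like(new_idx, dtype=self.weight.dtype))


# Hypothetical usage: 50k labels, encoder dimension 768, 32 weights per label.
layer = FixedFanInSparseLinear(in_features=768, num_labels=50_000, fan_in=32)
logits = layer(torch.randn(4, 768))   # (4, 50_000)
layer.set_update()                     # call periodically during training
```

Because the layer never materialises a dense `num_labels x in_features` weight matrix or a binary mask, it reflects the memory argument in the abstract; the actual efficiency on GPU depends on the sparse kernels used, which this gather-based sketch does not attempt to reproduce.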