Mixture of Hidden-Dimensions Transformer
---|---|
Format: | Article |
Language: | English |
Abstract: | Transformer models encounter challenges in scaling hidden dimensions efficiently, as uniformly increasing them inflates computational and memory costs while failing to emphasize the features most relevant to each token. To understand this better, we study hidden-dimension sparsity and observe that trained Transformers utilize only a small fraction of token dimensions, revealing an "activation flow" pattern. Notably, there are shared sub-dimensions with sustained activation across multiple consecutive tokens and specialized sub-dimensions uniquely activated for each token. To better model token-relevant sub-dimensions, we propose MoHD (Mixture of Hidden Dimensions), a sparse conditional-activation architecture. In particular, MoHD employs shared sub-dimensions for common token features and a routing mechanism to dynamically activate specialized sub-dimensions. To mitigate potential information loss from sparsity, we design activation scaling and group fusion mechanisms that preserve activation flow. In this way, MoHD expands the hidden dimension with negligible increases in computation or parameters, enabling efficient training and inference while maintaining performance. Evaluations across 10 NLP tasks show that MoHD surpasses vanilla Transformers in parameter efficiency and task performance, achieving 1.7% higher performance with 50% fewer activation parameters and 3.7% higher performance with a 3x parameter expansion at constant activation cost. MoHD offers a new perspective on model scaling, showcasing the potential of hidden-dimension sparsity to boost efficiency. |
DOI: | 10.48550/arxiv.2412.05644 |
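The abstract only sketches the architecture, so the following is a minimal, hypothetical PyTorch sketch of how shared sub-dimensions, token-level routing to specialized sub-dimensions, activation scaling, and group fusion could fit together. All names (`MoHDProjection`, `d_sub`, `top_k`), shapes, and the specific scaling and fusion formulas are assumptions for illustration, not the paper's actual implementation; see the DOI above for the authors' method.

```python
# Hypothetical sketch of a MoHD-style projection, based only on the abstract.
# Every structural choice below (sub-dimension width, router form, scaling
# factor, fusion layer) is an assumption and may differ from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoHDProjection(nn.Module):
    """Splits the hidden dimension into always-on shared sub-dimensions and
    router-selected specialized sub-dimensions (assumed interpretation)."""

    def __init__(self, d_model, d_sub, num_shared, num_specialized, top_k):
        super().__init__()
        self.top_k = top_k
        # One projection per sub-dimension group (shared + specialized).
        self.shared = nn.ModuleList(
            [nn.Linear(d_model, d_sub) for _ in range(num_shared)]
        )
        self.specialized = nn.ModuleList(
            [nn.Linear(d_model, d_sub) for _ in range(num_specialized)]
        )
        self.router = nn.Linear(d_model, num_specialized)
        # "Group fusion": map the concatenated active sub-dimensions back to d_model.
        self.fuse = nn.Linear(d_sub * (num_shared + top_k), d_model)

    def forward(self, x):                           # x: (batch, seq, d_model)
        # Shared sub-dimensions are always active for every token.
        shared_out = [proj(x) for proj in self.shared]

        # Route each token to its top-k specialized sub-dimensions.
        scores = self.router(x)                     # (batch, seq, num_specialized)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)

        spec_all = torch.stack(
            [proj(x) for proj in self.specialized], dim=-2
        )                                           # (batch, seq, num_specialized, d_sub)
        gather_idx = idx.unsqueeze(-1).expand(*idx.shape, spec_all.size(-1))
        spec_sel = spec_all.gather(-2, gather_idx)  # (batch, seq, top_k, d_sub)

        # "Activation scaling": rescale the selected outputs by router weights
        # to compensate for the inactive sub-dimensions (assumed form).
        spec_sel = spec_sel * weights.unsqueeze(-1) * self.top_k

        out = torch.cat(shared_out + list(spec_sel.unbind(dim=-2)), dim=-1)
        return self.fuse(out)


if __name__ == "__main__":
    # Toy usage: the layer keeps the interface of a d_model -> d_model projection.
    layer = MoHDProjection(d_model=512, d_sub=64, num_shared=2,
                           num_specialized=12, top_k=2)
    y = layer(torch.randn(4, 16, 512))
    print(y.shape)  # torch.Size([4, 16, 512])
```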