Anisotropy Is Inherent to Self-Attention in Transformers
Format: Article
Language: English
Abstract: The representation degeneration problem is a phenomenon widely observed among self-supervised learning methods based on Transformers. In NLP, it takes the form of anisotropy, a singular property of hidden representations which makes them unexpectedly close to each other in terms of angular distance (cosine similarity). Some recent works suggest that anisotropy is a consequence of optimizing the cross-entropy loss on long-tailed distributions of tokens. We show in this paper that anisotropy can also be observed empirically in language models with specific objectives that should not suffer directly from the same consequences. We also show that the anisotropy problem extends to Transformers trained on other modalities. Our observations suggest that anisotropy is actually inherent to Transformer-based models.
DOI: 10.48550/arxiv.2401.12143
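As a rough illustration of how the anisotropy described in the abstract is typically quantified, the sketch below estimates it as the mean pairwise cosine similarity of a Transformer's hidden representations. This is not the paper's code; the model choice, the input sentences, and the use of the Hugging Face `transformers` library are assumptions made for illustration. In a perfectly isotropic space this average would be near 0; anisotropic representations yield values well above that.

```python
# Minimal sketch (not the paper's implementation): estimate anisotropy as the
# mean pairwise cosine similarity of hidden token representations.
# Model name and sentences are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # assumed; any Transformer encoder works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

sentences = [
    "The cat sat on the mat.",
    "Transformers are widely used in NLP.",
    "Self-supervised objectives shape the embedding geometry.",
]

with torch.no_grad():
    batch = tokenizer(sentences, return_tensors="pt", padding=True)
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, dim)

# Keep only real (non-padding) token vectors.
mask = batch["attention_mask"].bool()
vectors = hidden[mask]  # (num_tokens, dim)

# Mean cosine similarity over all distinct token pairs.
normed = torch.nn.functional.normalize(vectors, dim=-1)
sims = normed @ normed.T
n = sims.size(0)
off_diag = sims[~torch.eye(n, dtype=torch.bool)]
print(f"average pairwise cosine similarity: {off_diag.mean().item():.3f}")
```

Excluding padding tokens and the diagonal matters here: padding vectors would inflate the estimate, and self-similarities are trivially 1.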