The geometry of hidden representations of large transformer models
Saved in:

| Field | Value |
|---|---|
| Main authors | , , , , , |
| Format | Article |
| Language | eng |
| Keywords | |
| Online access | Order full text |
Abstract: Large transformers are powerful architectures used for self-supervised data analysis across various data types, including protein sequences, images, and text. In these models, the semantic structure of the dataset emerges from a sequence of transformations between one representation and the next. We characterize the geometric and statistical properties of these representations and how they change as we move through the layers. By analyzing the intrinsic dimension (ID) and neighbor composition, we find that the representations evolve similarly in transformers trained on protein language tasks and on image reconstruction tasks. In the first layers, the data manifold expands, becoming high-dimensional, and then contracts significantly in the intermediate layers. In the last part of the model, the ID remains approximately constant or forms a second, shallow peak. We show that the semantic information of the dataset is better expressed at the end of the first peak, and that this phenomenon can be observed across many models trained on diverse datasets. Based on our findings, we point out an explicit strategy to identify, without supervision, the layers that maximize semantic content: representations at intermediate layers corresponding to a relative minimum of the ID profile are more suitable for downstream learning tasks.
DOI: 10.48550/arxiv.2302.00294
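
The layer-selection strategy described in the abstract (take the intermediate layer sitting at a relative minimum of the intrinsic-dimension profile) can be sketched in a few lines. The sketch below is illustrative, not the authors' implementation: it assumes the per-layer hidden representations have already been extracted as NumPy arrays, uses the TwoNN intrinsic-dimension estimator (Facco et al., 2017) as the ID measure, and the helper names `two_nn_id` and `select_layer` are hypothetical.

```python
# Illustrative sketch (not the paper's code): estimate the intrinsic
# dimension (ID) of each layer's representation and select the layer at
# the first relative minimum of the ID profile after its initial peak.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def two_nn_id(X):
    """TwoNN ID estimate for an (n_samples, n_features) array.

    For every point, mu = r2 / r1 is the ratio of the distances to its
    second and first nearest neighbors; the maximum-likelihood estimate
    of the intrinsic dimension is N / sum(log(mu)).
    """
    dist, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    r1, r2 = dist[:, 1], dist[:, 2]       # dist[:, 0] is the point itself
    mu = r2[r1 > 0] / r1[r1 > 0]          # skip duplicated points (r1 == 0)
    return mu.size / np.sum(np.log(mu))

def select_layer(layer_reps):
    """Return the per-layer ID profile and the index of the first relative
    minimum after the initial ID peak (a simple heuristic mirroring the
    strategy stated in the abstract)."""
    ids = np.array([two_nn_id(X) for X in layer_reps])
    peak = int(np.argmax(ids))
    for i in range(peak + 1, len(ids) - 1):
        if ids[i] <= ids[i - 1] and ids[i] <= ids[i + 1]:
            return ids, i
    # Fallback: lowest ID anywhere after the peak.
    return ids, peak + int(np.argmin(ids[peak:]))
```

Here `layer_reps` would be, for example, a list of pooled hidden-state matrices taken from each block of a protein language model or an image autoencoder; the returned index points to the layer whose representation the abstract suggests is best suited for downstream learning tasks.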