Analyzing Transformer Dynamics as Movement through Embedding Space
Format: Article
Language: English
Abstract: Transformer-based language models exhibit intelligent behaviors such as understanding natural language, recognizing patterns, acquiring knowledge, reasoning, planning, reflecting and using tools. This paper explores how their underlying mechanics give rise to intelligent behaviors. Towards that end, we propose framing Transformer dynamics as movement through embedding space. Examining Transformers through this perspective reveals key insights, establishing a Theory of Transformers: 1) Intelligent behaviours map to paths in Embedding Space which the Transformer random-walks through during inferencing. 2) LM training learns a probability distribution over all possible paths. 'Intelligence' is learnt by assigning higher probabilities to paths representing intelligent behaviors. No learning can take place in-context; context only narrows the subset of paths sampled during decoding. 5) The Transformer is a self-mapping composition function, folding a context sequence into a context-vector such that its proximity to a token-vector reflects its co-occurrence and conditioned probability. Thus, the physical arrangement of vectors in Embedding Space determines path probabilities. 6) Context vectors are composed by aggregating features of the sequence's tokens via a process we call the encoding walk. Attention contributes a (potentially redundant) association bias to this process. 7) This process comprises two principal operation types: filtering (data independent) and aggregation (data dependent). This generalization unifies Transformers with other sequence models. Building upon this foundation, we formalize a popular semantic interpretation of embeddings into a "concept-space theory" and find some evidence of its validity.
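To make the path framing concrete, the following NumPy toy is a minimal sketch, not the paper's implementation: it treats the conditioned next-token probability as a softmax over the context-vector's dot-product proximity to each token-vector, and decoding as a random walk whose path probability is the product of those conditionals. The names `compose`, `next_token_distribution`, and `random_walk`, the random `token_vectors`, and the mean-pooling composition are illustrative assumptions introduced here, not definitions from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary: V tokens, each with a d-dimensional token-vector.
# In a real model these would be learned (un)embedding rows; here they are random.
V, d = 8, 16
token_vectors = rng.normal(size=(V, d))


def compose(context_ids):
    """Stand-in for the Transformer's composition function: fold the context
    sequence into a single context-vector (here, simply the mean of the
    token vectors, which is purely illustrative)."""
    return token_vectors[context_ids].mean(axis=0)


def next_token_distribution(context_ids):
    """Conditioned next-token probabilities from proximity: dot product of the
    context-vector with every token-vector, normalised by a softmax."""
    logits = token_vectors @ compose(context_ids)
    logits -= logits.max()          # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()


def random_walk(prompt_ids, steps=5):
    """Decoding as a random walk: sample each next token from the conditioned
    distribution; the path probability is the product of the conditionals."""
    path = list(prompt_ids)
    log_prob = 0.0
    for _ in range(steps):
        probs = next_token_distribution(path)
        tok = int(rng.choice(V, p=probs))
        log_prob += np.log(probs[tok])
        path.append(tok)
    return path, float(np.exp(log_prob))


path, prob = random_walk([0, 3], steps=5)
print("sampled path:", path, "path probability:", prob)
```

Swapping the mean-pooling `compose` for an actual Transformer forward pass would leave the rest of the sketch unchanged, which is one way to read the abstract's claim that the physical arrangement of vectors in Embedding Space determines path probabilities.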
DOI: 10.48550/arxiv.2308.10874