Theoretical limitations of multi-layer Transformer
Format: | Article |
Language: | English |
Abstract: | Transformers, especially the decoder-only variants, are the backbone of most modern large language models; yet we do not have much understanding of their expressive power except for the simple $1$-layer case. Due to the difficulty of analyzing multi-layer models, all previous work relies on unproven complexity conjectures to show limitations for multi-layer Transformers. In this work, we prove the first $\textit{unconditional}$ lower bound against multi-layer decoder-only transformers. For any constant $L$, we prove that any $L$-layer decoder-only transformer needs a polynomial model dimension ($n^{\Omega(1)}$) to perform sequential composition of $L$ functions over an input of $n$ tokens.

As a consequence, our results give: (1) the first depth-width trade-off for multi-layer transformers, exhibiting that the $L$-step composition task is exponentially harder for $L$-layer models compared to $(L+1)$-layer ones; (2) an unconditional separation between encoder and decoder, exhibiting a hard task for decoders that can be solved by an exponentially shallower and smaller encoder; (3) a provable advantage of chain-of-thought, exhibiting a task that becomes exponentially easier with chain-of-thought.

On the technical side, we propose the multi-party $\textit{autoregressive}$ $\textit{communication}$ $\textit{model}$ that captures the computation of a decoder-only Transformer. We also introduce a new proof technique that finds a certain $\textit{indistinguishable}$ $\textit{decomposition}$ of all possible inputs iteratively for proving lower bounds in this model. We believe our new communication model and proof technique will be helpful to further understand the computational power of transformers. |
DOI: | 10.48550/arxiv.2412.02975 |
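To make the hardness claim concrete, the following is a minimal, hypothetical sketch of what an $L$-step sequential function composition task over $n$ tokens can look like: each of $L$ functions over a small token alphabet is given as a lookup table in the input, and the target is the composed value on a query token. The encoding, function names, and alphabet size below are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative sketch (assumed encoding, not the paper's construction):
# an L-step sequential function composition task. Each function
# f_1, ..., f_L on {0, ..., alphabet_size - 1} is written out as a
# lookup table in the input, followed by a query token x; the target
# output is f_L(f_{L-1}(... f_1(x) ...)).

import random


def make_instance(alphabet_size: int, num_steps: int):
    """Sample L random functions as lookup tables plus a query token."""
    functions = [
        [random.randrange(alphabet_size) for _ in range(alphabet_size)]
        for _ in range(num_steps)
    ]
    query = random.randrange(alphabet_size)
    return functions, query


def compose(functions, query):
    """Ground truth: apply f_1, then f_2, ..., then f_L to the query."""
    value = query
    for table in functions:
        value = table[value]
    return value


if __name__ == "__main__":
    fs, x = make_instance(alphabet_size=8, num_steps=3)
    print("f_3(f_2(f_1(x))) =", compose(fs, x))
```

Read alongside the abstract, the intuition is that each composition step forces one more round of sequential information flow, which is why, per the stated depth-width trade-off, the $L$-step task is exponentially harder for $L$-layer decoder-only models than for $(L+1)$-layer ones.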