Benign Overfitting in Single-Head Attention
Format: Article
Language: English
Abstract: The phenomenon of benign overfitting, where a trained neural network perfectly fits noisy training data but still achieves near-optimal test performance, has been extensively studied in recent years for linear models and fully-connected/convolutional networks. In this work, we study benign overfitting in a single-head softmax attention model, which is the fundamental building block of Transformers. We prove that under appropriate conditions, the model exhibits benign overfitting in a classification setting already after two steps of gradient descent. Moreover, we show conditions under which a minimum-norm/maximum-margin interpolator exhibits benign overfitting. We study how the overfitting behavior depends on the signal-to-noise ratio (SNR) of the data distribution, namely the ratio between the norms of signal and noise tokens, and prove that a sufficiently large SNR is both necessary and sufficient for benign overfitting.
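
As a rough illustration of the setting (not the paper's exact model or data distribution), the sketch below implements a single-head softmax-attention classifier on a token matrix and computes an SNR as the ratio between the signal-token norm and the typical noise-token norm. The names `attention_score`, `mu`, `p`, and `v` are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_score(X, p, v):
    """Single-head softmax attention classifier (illustrative sketch).

    X : (T, d) matrix of T tokens in d dimensions.
    p : (d,) attention parameter; X @ p gives one logit per token.
    v : (d,) readout parameter; the sign of the output is the predicted label.
    """
    attn = softmax(X @ p)   # (T,) attention weights over the tokens
    pooled = attn @ X       # (d,) attention-weighted combination of tokens
    return pooled @ v       # scalar score

# Toy signal-plus-noise data: one signal token (a class-dependent vector mu)
# and T - 1 noise tokens; the label y = +1 or -1 flips the sign of mu.
rng = np.random.default_rng(0)
d, T = 50, 4
y = 1
mu = 3.0 * rng.standard_normal(d)          # signal direction (illustrative)
noise = rng.standard_normal((T - 1, d))    # noise tokens
X = np.vstack([y * mu, noise])

# SNR read as the ratio between the signal-token norm and a typical
# noise-token norm (one plausible reading of the abstract's definition).
snr = np.linalg.norm(mu) / np.linalg.norm(noise, axis=1).mean()

p, v = rng.standard_normal(d), rng.standard_normal(d)   # untrained parameters
print(f"SNR ~ {snr:.2f}, score = {attention_score(X, p, v):+.3f}")
```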
DOI: 10.48550/arxiv.2410.07746