FNet: Mixing Tokens with Fourier Transforms
Main authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | We show that Transformer encoder architectures can be sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that "mix" input tokens. These linear mixers, along with standard nonlinearities in feed-forward layers, prove competent at modeling semantic relationships in several text classification tasks. Most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder with a standard, unparameterized Fourier Transform achieves 92-97% of the accuracy of BERT counterparts on the GLUE benchmark, but trains 80% faster on GPUs and 70% faster on TPUs at standard 512 input lengths. At longer input lengths, our FNet model is significantly faster: when compared to the "efficient" Transformers on the Long Range Arena benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all sequence lengths on GPUs (and across relatively shorter lengths on TPUs). Finally, FNet has a light memory footprint and is particularly efficient at smaller model sizes; for a fixed speed and accuracy budget, small FNet models outperform Transformer counterparts. |
---|---|
DOI: | 10.48550/arxiv.2105.03824 |
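
The abstract describes replacing the self-attention sublayer with an unparameterized Fourier transform that mixes tokens. Below is a minimal NumPy sketch of one such encoder block, assuming the mixing step is a 2D DFT over the sequence and hidden dimensions with only the real part kept, as in the FNet paper; the function names, the ReLU feed-forward, the omitted LayerNorm scale/bias, and the toy dimensions are illustrative choices, not the paper's reference implementation.

```python
import numpy as np

def fourier_mixing(x):
    """Token-mixing sublayer: 2D DFT over the sequence and hidden axes, real part kept.

    x has shape (seq_len, d_model). This unparameterized step stands in for the
    self-attention sublayer; the rest of the encoder block is unchanged.
    """
    return np.fft.fft2(x).real

def layer_norm(x, eps=1e-12):
    # Per-position normalization over the hidden dimension (scale/bias omitted for brevity).
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def feed_forward(x, w1, b1, w2, b2):
    # Position-wise feed-forward; ReLU used here for brevity in place of GELU.
    return np.maximum(x @ w1 + b1, 0.0) @ w2 + b2

def fnet_encoder_block(x, w1, b1, w2, b2):
    # Fourier mixing replaces self-attention; residual connections and LayerNorm
    # follow the standard post-norm Transformer encoder layout.
    x = layer_norm(x + fourier_mixing(x))
    x = layer_norm(x + feed_forward(x, w1, b1, w2, b2))
    return x

# Toy usage with hypothetical dimensions.
rng = np.random.default_rng(0)
seq_len, d_model, d_ff = 8, 16, 32
x = rng.normal(size=(seq_len, d_model))
w1, b1 = 0.02 * rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
w2, b2 = 0.02 * rng.normal(size=(d_ff, d_model)), np.zeros(d_model)
print(fnet_encoder_block(x, w1, b1, w2, b2).shape)  # (8, 16)
```

Because the mixing step has no trainable parameters, the only learned weights in this block sit in the feed-forward sublayer (and, in a full model, the embeddings), which is consistent with the light memory footprint noted in the abstract.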