Brainformers: Trading Simplicity for Efficiency
Main Authors: | |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | Transformers are central to recent successes in natural language processing and computer vision. Transformers have a mostly uniform backbone where layers alternate between feed-forward and self-attention in order to build a deep network. Here we investigate this design choice and find that more complex blocks that have different permutations of layer primitives can be more efficient. Using this insight, we develop a complex block, named Brainformer, that consists of a diverse set of layers such as sparsely gated feed-forward layers, dense feed-forward layers, attention layers, and various forms of layer normalization and activation functions. Brainformer consistently outperforms the state-of-the-art dense and sparse Transformers in terms of both quality and efficiency. A Brainformer model with 8 billion activated parameters per token demonstrates 2x faster training convergence and 5x faster step time compared to its GLaM counterpart. In downstream task evaluation, Brainformer also demonstrates a 3% higher SuperGLUE score with fine-tuning compared to GLaM with a similar number of activated parameters. Finally, Brainformer largely outperforms a Primer dense model derived with NAS with similar computation per token on few-shot evaluations. |
DOI: | 10.48550/arxiv.2306.00008 |
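
The summary above describes the Brainformer block only at a high level: instead of the uniform [self-attention → feed-forward] repetition of a standard Transformer, a block mixes sparsely gated (mixture-of-experts) feed-forward layers, dense feed-forward layers, attention layers, and layer normalization in a non-standard order. The sketch below is a minimal PyTorch illustration of that idea under assumptions of our own; the specific layer ordering, the top-1 routing in `SparselyGatedFFN`, and all hyperparameters are hypothetical and are not the block discovered in the paper.

```python
# Minimal sketch: a block assembled from a configurable permutation of layer
# primitives (sparsely gated FFN, dense FFN, attention), each wrapped in layer
# normalization with a residual connection. Illustrative only.
import torch
import torch.nn as nn


class SparselyGatedFFN(nn.Module):
    """Toy top-1 mixture-of-experts feed-forward layer (assumed routing scheme)."""

    def __init__(self, dim: int, num_experts: int = 4, hidden: int = 512):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim). Route each token to its single best expert.
        gates = torch.softmax(self.router(x), dim=-1)   # (B, S, num_experts)
        weight, index = gates.max(dim=-1)               # (B, S)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = index == e                           # tokens routed to expert e
            if mask.any():
                out[mask] = weight[mask].unsqueeze(-1) * expert(x[mask])
        return out


class BrainformerStyleBlock(nn.Module):
    """Block built from a configurable permutation of layer primitives."""

    def __init__(self, dim: int, heads: int, layer_order: list):
        super().__init__()
        self.layer_order = layer_order
        self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in layer_order)
        self.layers = nn.ModuleList()
        for kind in layer_order:
            if kind == "attn":
                self.layers.append(nn.MultiheadAttention(dim, heads, batch_first=True))
            elif kind == "moe_ffn":
                self.layers.append(SparselyGatedFFN(dim))
            elif kind == "dense_ffn":
                self.layers.append(
                    nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
                )
            else:
                raise ValueError(f"unknown primitive: {kind}")

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for kind, norm, layer in zip(self.layer_order, self.norms, self.layers):
            h = norm(x)
            if kind == "attn":
                h, _ = layer(h, h, h)
            else:
                h = layer(h)
            x = x + h  # residual connection around every primitive
        return x


# A vanilla Transformer block corresponds to the fixed order ["attn", "dense_ffn"];
# the ordering below is a made-up example of a richer permutation.
block = BrainformerStyleBlock(
    dim=256, heads=4, layer_order=["moe_ffn", "attn", "dense_ffn", "moe_ffn"]
)
tokens = torch.randn(2, 16, 256)
print(block(tokens).shape)  # torch.Size([2, 16, 256])
```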