Efficient Attention: Attention with Linear Complexities
Format: | Article |
---|---|
Language: | eng |
Online access: | Order full text |
Abstract: | Dot-product attention is widely used in computer vision and natural
language processing, but its memory and computational costs grow quadratically
with the input size, which prohibits its application to high-resolution inputs.
To remedy this drawback, this paper proposes an efficient attention mechanism
that is equivalent to dot-product attention but requires substantially less
memory and computation. Its resource efficiency allows attention modules to be
integrated into a network more widely and flexibly, which leads to better
accuracy. Empirical evaluations demonstrate these advantages: efficient
attention modules brought significant performance gains to object detectors and
instance segmenters on MS-COCO 2017. Furthermore, the resource efficiency makes
attention practical for complex models in which the high cost of dot-product
attention is prohibitive; as an exemplar, a model with efficient attention
achieved state-of-the-art accuracy for stereo depth estimation on the Scene
Flow dataset. Code is available at
https://github.com/cmsflash/efficient-attention. |
DOI: | 10.48550/arxiv.1812.01243 |
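
The mechanism summarized in the abstract can be made concrete with a short
sketch. The following NumPy code is a minimal illustration, not the authors'
released implementation (see the linked repository for that). It assumes the
factorized softmax form E(Q, K, V) = ρ_q(Q) (ρ_k(K)ᵀ V) described in the paper,
in which queries are normalized along the feature dimension and keys along the
position dimension, so the n × n attention map is never materialized.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(Q, K, V):
    # Standard attention: materializes an (n, n) map, hence O(n^2) cost.
    weights = softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1)  # (n, n)
    return weights @ V

def efficient_attention(Q, K, V):
    # Factorized form: normalize queries and keys separately, then compute
    # K^T V first. No (n, n) matrix is ever formed, so the cost is
    # O(n * dk * dv), i.e., linear in the input size n.
    q = softmax(Q, axis=-1)  # softmax over the feature dimension of each query
    k = softmax(K, axis=0)   # softmax over positions for each key channel
    context = k.T @ V        # (dk, dv) global context summary
    return q @ context       # (n, dv)

# Toy comparison on random data: output shapes match, but the efficient
# variant avoids the quadratic attention map entirely.
rng = np.random.default_rng(0)
n, dk, dv = 1024, 64, 64
Q = rng.normal(size=(n, dk))
K = rng.normal(size=(n, dk))
V = rng.normal(size=(n, dv))
print(dot_product_attention(Q, K, V).shape)  # (1024, 64)
print(efficient_attention(Q, K, V).shape)    # (1024, 64)
```

The key design choice is the reordering: computing Kᵀ V (a dk × dv matrix)
before multiplying by the queries is what turns the quadratic cost into a
linear one. For the exact normalization variants and the equivalence argument,
consult the paper and the linked repository.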