MaiT: Leverage Attention Masks for More Efficient Image Transformers
Abstract: Though image transformers have shown competitive results with convolutional neural networks in computer vision tasks, their lack of inductive biases such as locality still poses problems for model efficiency, especially in embedded applications. In this work, we address this issue by introducing attention masks that incorporate spatial locality into self-attention heads. Local dependencies are captured efficiently by masked attention heads, while global dependencies are captured by unmasked attention heads. With the Masked attention image Transformer (MaiT), top-1 accuracy increases by up to 1.7% compared to CaiT with fewer parameters and FLOPs, and throughput improves by up to 1.5X compared to Swin. Encoding locality with attention masks is model agnostic, and thus applies to monolithic, hierarchical, or other novel transformer architectures.
DOI: 10.48550/arxiv.2207.03006
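
The record gives no implementation details, but the abstract's central idea (per-head attention masks, where some heads are restricted to a spatial neighborhood of each patch while others attend globally) can be illustrated with a short sketch. The following is a minimal PyTorch sketch under stated assumptions, not the authors' code: the names `local_attention_mask`, `MaskedMHSA`, `window`, and `num_local_heads` are all illustrative, and the class token is ignored for simplicity.

```python
# Minimal sketch: multi-head self-attention over image patches where the
# first num_local_heads heads are masked to a local square neighborhood
# and the remaining heads attend globally. Illustrative only; the actual
# MaiT implementation may differ.
import torch
import torch.nn as nn


def local_attention_mask(grid_size: int, window: int) -> torch.Tensor:
    """Boolean (N, N) mask over patch tokens: True where attention is allowed.

    A query patch may attend only to key patches whose row and column
    indices both lie within `window` of its own (a square neighborhood).
    """
    coords = torch.stack(torch.meshgrid(
        torch.arange(grid_size), torch.arange(grid_size), indexing="ij"
    ), dim=-1).reshape(-1, 2)                               # (N, 2) patch coords
    diff = (coords[:, None, :] - coords[None, :, :]).abs()  # (N, N, 2)
    return (diff <= window).all(dim=-1)                     # (N, N) bool


class MaskedMHSA(nn.Module):
    """Self-attention with a mix of locality-masked and global heads."""

    def __init__(self, dim, num_heads, num_local_heads, grid_size, window):
        super().__init__()
        assert dim % num_heads == 0 and num_local_heads <= num_heads
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.num_local_heads = num_local_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        # Fixed (non-learned) locality mask, shared by all masked heads.
        self.register_buffer("local_mask", local_attention_mask(grid_size, window))

    def forward(self, x):                        # x: (B, N, dim) patch tokens
        B, N, _ = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)     # each (B, H, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5  # (B, H, N, N)
        # Mask only the designated local heads; the rest stay global.
        masked = attn[:, :self.num_local_heads].masked_fill(
            ~self.local_mask, float("-inf"))
        attn = torch.cat([masked, attn[:, self.num_local_heads:]], dim=1)
        out = attn.softmax(dim=-1) @ v           # (B, H, N, head_dim)
        return self.proj(out.transpose(1, 2).reshape(B, N, -1))
```

For example, with a 14x14 patch grid (N = 196) and `window = 2`, each masked head attends to at most a 5x5 neighborhood per query, while unmasked heads keep full global attention, matching the abstract's split between local and global dependencies.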