When Shift Operation Meets Vision Transformer: An Extremely Simple Alternative to Attention Mechanism
Format: Article
Language: English
Abstract: The attention mechanism has been widely believed to be the key to
the success of vision transformers (ViTs), since it provides a flexible and
powerful way to model spatial relationships. However, is the attention
mechanism truly an indispensable part of ViT? Can it be replaced by some
other alternative? To demystify the role of the attention mechanism, we
simplify it into an extremely simple case: ZERO FLOPs and ZERO parameters.
Concretely, we revisit the shift operation. It contains no parameters and
performs no arithmetic; its only effect is to exchange a small portion of
the channels between neighboring features. Based on this simple operation,
we construct a new backbone network, namely ShiftViT, in which the attention
layers of ViT are substituted with shift operations. Surprisingly, ShiftViT
works quite well on several mainstream tasks, e.g., classification,
detection, and segmentation, performing on par with or even better than the
strong baseline Swin Transformer. These results suggest that the attention
mechanism might not be the vital factor that makes ViT successful; it can
even be replaced by a zero-parameter operation. We should pay more attention
to the remaining parts of ViT in future work. Code is available at
github.com/microsoft/SPACH.
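
For illustration, below is a minimal PyTorch sketch of such a shift
operation. It assumes a feature map of shape (B, C, H, W) and a
per-direction channel ratio of 1/12; both the ratio and the zero-padding at
the borders are illustrative assumptions here, not necessarily the paper's
exact configuration.

    import torch

    def shift_feature(x: torch.Tensor, ratio: float = 1 / 12) -> torch.Tensor:
        """Shift a small portion of channels toward the four spatial neighbors.

        A sketch of a zero-parameter, zero-FLOP shift as described in the
        abstract: no weights, no arithmetic, only channel exchange between
        neighboring positions. x has shape (B, C, H, W).
        """
        B, C, H, W = x.shape
        g = int(C * ratio)          # channels shifted per direction (assumed ratio)
        out = torch.zeros_like(x)   # out-of-frame positions are zero-padded

        out[:, 0*g:1*g, :, :-1] = x[:, 0*g:1*g, :, 1:]   # shift left
        out[:, 1*g:2*g, :, 1:]  = x[:, 1*g:2*g, :, :-1]  # shift right
        out[:, 2*g:3*g, :-1, :] = x[:, 2*g:3*g, 1:, :]   # shift up
        out[:, 3*g:4*g, 1:, :]  = x[:, 3*g:4*g, :-1, :]  # shift down
        out[:, 4*g:, :, :]      = x[:, 4*g:, :, :]       # remaining channels untouched
        return out

For example, applied to x = torch.randn(1, 96, 56, 56), shift_feature(x)
returns a tensor of the same shape in which four groups of 8 channels each
have been moved one pixel left, right, up, and down, while the remaining
channels pass through unchanged.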
DOI: 10.48550/arxiv.2201.10801