Multi-manifold Attention for Vision Transformers
Format: Article
Language: English
Abstract: Vision Transformers are very popular nowadays due to their state-of-the-art performance in several computer vision tasks, such as image classification and action recognition. Although their performance has been greatly enhanced through highly descriptive patch embeddings and hierarchical structures, there is still limited research on utilizing additional data representations to refine the self-attention map of a Transformer. To address this problem, a novel attention mechanism, called multi-manifold multi-head attention, is proposed in this work to substitute the vanilla self-attention of a Transformer. The proposed mechanism models the input space in three distinct manifolds, namely Euclidean, Symmetric Positive Definite and Grassmann, thus leveraging different statistical and geometrical properties of the input for the computation of a highly descriptive attention map. In this way, the proposed attention mechanism can guide a Vision Transformer to become more attentive towards important appearance, color and texture features of an image, leading to improved classification and segmentation results, as shown by the experimental results on well-known datasets.
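The abstract's core idea, computing attention maps on several manifolds and blending them, can be sketched as follows. This is a minimal toy illustration, not the paper's exact formulation: the way each token embedding is reshaped into a small matrix, the covariance construction for the SPD branch, the QR-based subspace for the Grassmann branch, and the uniform blending weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def euclidean_attn(Q, K):
    # Euclidean branch: the vanilla scaled dot-product attention map.
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1]))

def _as_matrix(x, rows=4):
    # Hypothetical choice: reshape a d-dim token embedding into a rows x (d/rows) matrix.
    return x.reshape(rows, -1)

def spd_attn(Q, K, eps=1e-4):
    # SPD branch: map each token to a covariance-like SPD matrix and compare
    # tokens with the log-Euclidean distance; smaller distance -> larger attention.
    def log_spd(x):
        X = _as_matrix(x)
        S = X @ X.T + eps * np.eye(X.shape[0])  # eps*I keeps S strictly positive definite
        w, U = np.linalg.eigh(S)
        return (U * np.log(w)) @ U.T            # matrix logarithm via eigendecomposition
    Lq = [log_spd(q) for q in Q]
    Lk = [log_spd(k) for k in K]
    D = np.array([[np.linalg.norm(a - b) for b in Lk] for a in Lq])
    return softmax(-D)

def grassmann_attn(Q, K):
    # Grassmann branch: map each token to the column subspace of its matrix
    # form (via QR) and compare subspaces with the projection metric.
    def proj(x):
        U, _ = np.linalg.qr(_as_matrix(x))
        return U @ U.T
    Pq = [proj(q) for q in Q]
    Pk = [proj(k) for k in K]
    D = np.array([[np.linalg.norm(a - b) for b in Pk] for a in Pq])
    return softmax(-D)

def multi_manifold_attn(Q, K, V, weights=(1/3, 1/3, 1/3)):
    # Blend the three attention maps (uniform weights are an assumption),
    # then aggregate the values with the combined map.
    A = (weights[0] * euclidean_attn(Q, K)
         + weights[1] * spd_attn(Q, K)
         + weights[2] * grassmann_attn(Q, K))
    return A @ V, A

n, d = 5, 8  # 5 token embeddings of dimension 8
Q = rng.normal(size=(n, d))
K = rng.normal(size=(n, d))
V = rng.normal(size=(n, d))
out, A = multi_manifold_attn(Q, K, V)
print(A.shape, np.allclose(A.sum(axis=1), 1.0))  # (5, 5) True
```

Because each branch produces a row-stochastic softmax map and the blending weights sum to one, the combined map is still a valid attention distribution over the tokens.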
DOI: 10.48550/arxiv.2207.08569