Gramian Attention Heads are Strong yet Efficient Vision Learners
Format: Article
Language: English
Abstract: We introduce a novel architecture design that enhances expressiveness by incorporating multiple head classifiers (i.e., classification heads) instead of relying on channel expansion or additional building blocks. Our approach employs attention-based aggregation, utilizing pairwise feature similarity to enhance multiple lightweight heads with minimal resource overhead. We compute Gramian matrices to reinforce the class token in an attention layer for each head. This enables the heads to learn more discriminative representations, enhancing their aggregation capabilities. Furthermore, we propose a learning algorithm that encourages the heads to complement each other by reducing their correlation for aggregation. Our models ultimately surpass state-of-the-art CNNs and ViTs in the accuracy-throughput trade-off on ImageNet-1K and deliver remarkable performance across various downstream tasks, such as COCO object instance segmentation, ADE20K semantic segmentation, and fine-grained visual classification datasets. The effectiveness of our framework is substantiated by practical experimental results and further underpinned by a generalization error bound. We release the code publicly at: https://github.com/Lab-LVM/imagenet-models
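The abstract describes the mechanism only at a high level. Below is a minimal, self-contained PyTorch sketch of the two ideas it names: a lightweight head whose class token is reinforced through the Gramian (pairwise similarity) of the patch features, and a loss term that reduces correlation between heads. All names here (`GramianHead`, `decorrelation_loss`) and concrete design choices (scaled-softmax Gramian, cosine-similarity penalty on class tokens, averaged logits) are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GramianHead(nn.Module):
    """Hypothetical lightweight classification head (sketch, not the paper's code).

    A learnable class token attends over patch features that have been
    reinforced by their Gramian matrix (pairwise feature similarity).
    """
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.cls_token = nn.Parameter(torch.randn(1, 1, dim) * 0.02)
        self.q = nn.Linear(dim, dim)
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, feats: torch.Tensor):
        # feats: (B, N, D) patch features from the backbone
        scale = feats.shape[-1] ** 0.5
        gram = F.softmax(feats @ feats.transpose(1, 2) / scale, dim=-1)  # (B, N, N)
        feats = gram @ feats                       # reinforce features by pairwise similarity
        q = self.q(self.cls_token).expand(feats.shape[0], -1, -1)        # (B, 1, D)
        attn = F.softmax(q @ feats.transpose(1, 2) / scale, dim=-1)      # (B, 1, N)
        cls = (attn @ feats).squeeze(1)            # (B, D) aggregated class token
        return self.fc(cls), cls

def decorrelation_loss(tokens: list) -> torch.Tensor:
    """Penalize cross-head similarity of class tokens so heads complement each other."""
    t = F.normalize(torch.stack(tokens, dim=1), dim=-1)   # (B, H, D)
    corr = t @ t.transpose(1, 2)                          # (B, H, H) cosine similarities
    off_diag = corr - torch.diag_embed(torch.diagonal(corr, dim1=1, dim2=2))
    return off_diag.pow(2).mean()

# Toy usage: four heads over random "patch features", with averaged logits plus
# a decorrelation term (the 0.1 weight is an arbitrary placeholder).
B, N, D, C = 8, 196, 384, 1000
patch_feats, labels = torch.randn(B, N, D), torch.randint(0, C, (B,))
heads = nn.ModuleList([GramianHead(D, C) for _ in range(4)])
outs = [h(patch_feats) for h in heads]
logits = torch.stack([o[0] for o in outs]).mean(0)
loss = F.cross_entropy(logits, labels) + 0.1 * decorrelation_loss([o[1] for o in outs])
```

Averaging the logits is only one plausible way to combine the heads; the paper's attention-based aggregation and its exact training loss are specified in the repository above.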
DOI: 10.48550/arxiv.2310.16483