SkipViT: Speeding Up Vision Transformers with a Token-Level Skip Connection
Saved in:
| Main authors: | , , , , , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Abstract: Vision transformers are known to be more computationally and data-intensive than CNN models. Transformer models such as ViT require all of the input image tokens in order to learn the relationships among them. However, many of these tokens are not informative and may contain irrelevant content such as an unrelated background or unimportant scenery. Such tokens receive little attention from the multi-head self-attention (MHSA), yet they still incur redundant and unnecessary computation in both MHSA and the feed-forward network (FFN). In this work, we propose a method that reduces the unnecessary interactions among unimportant tokens by separating them out and routing them through a different low-cost computational path. Our method adds no parameters to the ViT model and aims for the best trade-off between training throughput and zero loss in the final model's Top-1 accuracy. Experimental results from training ViT-small from scratch on Huawei Ascend910A show that SkipViT can effectively drop 55% of the tokens while gaining more than 13% training throughput and maintaining the classification accuracy of the baseline model.
DOI: 10.48550/arxiv.2401.15293
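
To make the abstract's idea concrete, below is a minimal, hypothetical PyTorch sketch of a token-level skip connection: unimportant patch tokens bypass MHSA and the FFN through a zero-cost path and are written back unchanged afterwards. The class name `SkipTokenBlock`, the [CLS]-attention importance score, the keep ratio of 0.45 (mirroring the 55% drop rate quoted in the abstract), and the choice to return skipped tokens untouched are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of token-level skipping in a single ViT block (PyTorch).
# Assumptions: importance is scored by the [CLS] query over patch tokens,
# a fixed keep ratio selects the "important" tokens, and skipped tokens
# bypass MHSA/FFN entirely and are scattered back unchanged.

import torch
import torch.nn as nn


class SkipTokenBlock(nn.Module):
    """Pre-norm transformer block that runs MHSA/FFN on a token subset only."""

    def __init__(self, dim: int, num_heads: int = 6, keep_ratio: float = 0.45):
        super().__init__()
        self.keep_ratio = keep_ratio  # 0.45 kept == 55% of patch tokens dropped
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 1 + N, D) with the [CLS] token at position 0.
        B, T, D = x.shape
        cls_tok, patches = x[:, :1], x[:, 1:]

        # Cheap importance proxy (an assumption): a single-query attention row
        # of [CLS] over the patch tokens, so no full N x N map is computed.
        scores = (self.norm1(cls_tok) @ self.norm1(patches).transpose(1, 2)) / D**0.5
        importance = scores.softmax(dim=-1).squeeze(1)  # (B, N)

        n_keep = max(1, int(self.keep_ratio * (T - 1)))
        keep_idx = importance.topk(n_keep, dim=-1).indices     # (B, n_keep)
        gather_idx = keep_idx.unsqueeze(-1).expand(-1, -1, D)  # (B, n_keep, D)

        # Route only [CLS] + the important patch tokens through MHSA and FFN.
        active = torch.cat([cls_tok, torch.gather(patches, 1, gather_idx)], dim=1)
        h = self.norm1(active)
        active = active + self.attn(h, h, h, need_weights=False)[0]
        active = active + self.mlp(self.norm2(active))

        # Token-level skip connection: unimportant tokens bypass the block and
        # are scattered back unchanged, restoring the full sequence length.
        out_patches = patches.clone()
        out_patches.scatter_(1, gather_idx, active[:, 1:])
        return torch.cat([active[:, :1], out_patches], dim=1)


if __name__ == "__main__":
    block = SkipTokenBlock(dim=384, num_heads=6)  # ViT-small-like width
    tokens = torch.randn(2, 197, 384)             # [CLS] + 14x14 patches
    print(block(tokens).shape)                    # torch.Size([2, 197, 384])
```

Because the skipped tokens rejoin the sequence unchanged, later layers still see the full token set; one natural design choice, though not confirmed by the abstract, is to apply such skipping only in deeper layers where importance estimates are more stable.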