SkipViT: Speeding Up Vision Transformers with a Token-Level Skip Connection

Bibliographic Details

Main authors: Ataiefard, Foozhan; Ahmed, Walid; Hajimolahoseini, Habib; Asani, Saina; Javadi, Farnoosh; Hassanpour, Mohammad; Awad, Omar Mohamed; Wen, Austin; Liu, Kangling; Liu, Yang
Format: Article
Language: English
Date: 2024-01-26
Source: arXiv.org
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning
DOI: 10.48550/arxiv.2401.15293
Online access: Full text available at https://arxiv.org/abs/2401.15293
Description: Vision transformers are known to be more computationally and data-intensive than CNN models. Transformer models such as ViT require all input image tokens in order to learn the relationships among them. However, many of these tokens are not informative and may contain irrelevant content such as unrelated background or unimportant scenery. These tokens are overlooked by the multi-head self-attention (MHSA), resulting in many redundant and unnecessary computations in the MHSA and feed-forward network (FFN) layers. In this work, we propose a method that reduces unnecessary interactions between unimportant tokens by separating them and sending them through a different, low-cost computational path. Our method adds no parameters to the ViT model and aims to find the best trade-off between training throughput and zero loss in the Top-1 accuracy of the final model. Our experimental results on training ViT-small from scratch show that SkipViT can effectively drop 55% of the tokens while gaining more than 13% training throughput and maintaining classification accuracy at the level of the baseline model on the Huawei Ascend 910A.
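The following is a minimal, hypothetical PyTorch sketch of the token-skipping idea summarized above; it is not the authors' implementation. In this sketch, patch tokens are scored by the CLS token's attention weights, only the highest-scoring tokens pass through the expensive MHSA/FFN path, and the remaining tokens bypass the block through an identity skip connection. The 0.45 keep ratio mirrors the reported 55% drop rate, but the scoring criterion, block placement, and all names are illustrative assumptions.

# Minimal sketch of token-level skipping in a ViT block (illustrative, not the authors' code).
# Assumption: token importance is scored by the CLS token's attention weights; the least
# important tokens bypass MHSA/FFN through an identity skip path.
import torch
import torch.nn as nn


class SkipViTBlock(nn.Module):
    def __init__(self, dim=384, num_heads=6, mlp_ratio=4.0, keep_ratio=0.45):
        super().__init__()
        self.keep_ratio = keep_ratio  # fraction of patch tokens kept on the full path (~45%, i.e. ~55% dropped)
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, int(dim * mlp_ratio)),
            nn.GELU(),
            nn.Linear(int(dim * mlp_ratio), dim),
        )

    def forward(self, x):
        # x: (batch, 1 + num_patches, dim); token 0 is the CLS token
        B, N, D = x.shape
        n_keep = max(1, int((N - 1) * self.keep_ratio))

        # Score patch tokens by the CLS token's attention over them (hypothetical criterion).
        xn = self.norm1(x)
        _, attn_w = self.attn(xn[:, :1], xn, xn, need_weights=True)  # weights: (B, 1, N)
        scores = attn_w[:, 0, 1:]                                    # (B, N-1), one score per patch token
        keep_idx = scores.topk(n_keep, dim=-1).indices + 1           # +1 offsets past the CLS token

        # Gather CLS plus the important tokens for the expensive path.
        idx = torch.cat([torch.zeros(B, 1, dtype=torch.long, device=x.device), keep_idx], dim=1)
        kept = torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, D))

        # Standard pre-norm transformer block applied to the kept tokens only.
        h = self.norm1(kept)
        kept = kept + self.attn(h, h, h, need_weights=False)[0]
        kept = kept + self.mlp(self.norm2(kept))

        # Token-level skip connection: dropped tokens pass through unchanged.
        out = x.clone()
        out.scatter_(1, idx.unsqueeze(-1).expand(-1, -1, D), kept)
        return out


if __name__ == "__main__":
    block = SkipViTBlock()
    tokens = torch.randn(2, 197, 384)  # e.g. ViT-small tokens for a 224x224 image
    print(block(tokens).shape)         # torch.Size([2, 197, 384])

Consistent with the description above, this sketch adds no new learnable parameters; it does, however, score tokens with an extra CLS-query attention pass, a simplification that a real implementation would fold into the block's own attention computation.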