SaiT: Sparse Vision Transformers through Adaptive Token Pruning
While vision transformers have achieved impressive results, effectively and efficiently accelerating these models can further boost performances. In this work, we propose a dense/sparse training framework to obtain a unified model, enabling weight sharing across various token densities. Thus one model offers a range of accuracy and throughput tradeoffs for different applications. Besides, we introduce adaptive token pruning to optimize the patch token sparsity based on the input image. In addition, we investigate knowledge distillation to enhance token selection capability in early transformer modules. Sparse adaptive image Transformer (SaiT) offers varying levels of model acceleration by merely changing the token sparsity on the fly. Specifically, SaiT reduces the computation complexity (FLOPs) by 39% - 43% and increases the throughput by 67% - 91% with less than 0.5% accuracy loss for various vision transformer models. Meanwhile, the same model also provides the zero accuracy drop option by skipping the sparsification step. SaiT achieves better accuracy and computation tradeoffs than state-of-the-art transformer and convolutional models.
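The record does not include the paper's code, so the following is only a minimal sketch of what score-based patch-token pruning can look like, not SaiT's actual algorithm: it keeps the highest-scoring patch tokens per image given some importance score (for example, attention to the class token), with `keep_ratio` standing in for the token density that the abstract says can be changed on the fly. The function and parameter names (`prune_tokens`, `scores`, `keep_ratio`) are hypothetical.

```python
# Illustrative sketch of score-based patch-token pruning for a ViT block.
# NOT the SaiT algorithm: the paper's scoring and adaptive sparsity are not
# reproduced here; this only shows the general mechanism of dropping
# low-importance patch tokens while keeping the class token.
import torch


def prune_tokens(tokens: torch.Tensor, scores: torch.Tensor, keep_ratio: float):
    """Keep the highest-scoring patch tokens for each image.

    tokens: (B, 1 + N, D)  -- class token followed by N patch tokens
    scores: (B, N)         -- importance score per patch token
    keep_ratio: fraction of patch tokens to retain (1.0 = dense path)
    """
    cls_tok, patch_tok = tokens[:, :1], tokens[:, 1:]
    num_keep = max(1, int(patch_tok.shape[1] * keep_ratio))
    # Indices of the top-scoring patches for each image in the batch.
    keep_idx = scores.topk(num_keep, dim=1).indices                      # (B, num_keep)
    keep_idx = keep_idx.unsqueeze(-1).expand(-1, -1, patch_tok.shape[-1])
    kept = patch_tok.gather(dim=1, index=keep_idx)                       # (B, num_keep, D)
    return torch.cat([cls_tok, kept], dim=1)


# Example: 196 patch tokens, keep 60% of them plus the class token.
B, N, D = 2, 196, 384
x = torch.randn(B, 1 + N, D)
s = torch.rand(B, N)
print(prune_tokens(x, s, keep_ratio=0.6).shape)   # torch.Size([2, 118, 384])
```

Under this reading, `keep_ratio=1.0` corresponds to the dense path with no accuracy loss, while smaller ratios trade accuracy for throughput, mirroring the dense/sparse options described in the abstract.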
Published in: | arXiv.org, 2022-10 |
---|---|
Main authors: | Li, Ling; Thorsley, David; Hassoun, Joseph |
Format: | Article |
Language: | English |
Subjects: | Accuracy; Computation; Distillation; Image enhancement; Pruning; Sparsity; Tradeoffs |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Li, Ling; Thorsley, David; Hassoun, Joseph |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2724396099 |
source | Free E-Journals |
subjects | Accuracy; Computation; Distillation; Image enhancement; Pruning; Sparsity; Tradeoffs |
title | SaiT: Sparse Vision Transformers through Adaptive Token Pruning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-20T07%3A33%3A11IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=SaiT:%20Sparse%20Vision%20Transformers%20through%20Adaptive%20Token%20Pruning&rft.jtitle=arXiv.org&rft.au=Li,%20Ling&rft.date=2022-10-11&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2724396099%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2724396099&rft_id=info:pmid/&rfr_iscdi=true |