ElasticViT: Conflict-aware Supernet Training for Deploying Fast Vision Transformer on Diverse Mobile Devices

Neural Architecture Search (NAS) has shown promising performance in the automatic design of vision transformers (ViT) exceeding 1G FLOPs. However, designing lightweight and low-latency ViT models for diverse mobile devices remains a big challenge. In this work, we propose ElasticViT, a two-stage NAS...

Bibliographic Details
Main Authors: Tang, Chen; Zhang, Li Lyna; Jiang, Huiqiang; Xu, Jiahang; Cao, Ting; Zhang, Quanlu; Yang, Yuqing; Wang, Zhi; Yang, Mao
Format: Article
Language: eng
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
creator Tang, Chen; Zhang, Li Lyna; Jiang, Huiqiang; Xu, Jiahang; Cao, Ting; Zhang, Quanlu; Yang, Yuqing; Wang, Zhi; Yang, Mao
description Neural Architecture Search (NAS) has shown promising performance in the automatic design of vision transformers (ViT) exceeding 1G FLOPs. However, designing lightweight and low-latency ViT models for diverse mobile devices remains a big challenge. In this work, we propose ElasticViT, a two-stage NAS approach that trains a high-quality ViT supernet over a very large search space that supports a wide range of mobile devices, and then searches an optimal sub-network (subnet) for direct deployment. However, prior supernet training methods that rely on uniform sampling suffer from the gradient conflict issue: the sampled subnets can have vastly different model sizes (e.g., 50M vs. 2G FLOPs), leading to different optimization directions and inferior performance. To address this challenge, we propose two novel sampling techniques: complexity-aware sampling and performance-aware sampling. Complexity-aware sampling limits the FLOPs difference among the subnets sampled across adjacent training steps, while covering different-sized subnets in the search space. Performance-aware sampling further selects subnets that have good accuracy, which can reduce gradient conflicts and improve supernet quality. Our discovered models, ElasticViT models, achieve top-1 accuracy from 67.2% to 80.0% on ImageNet from 60M to 800M FLOPs without extra retraining, outperforming all prior CNNs and ViTs in terms of accuracy and latency. Our tiny and small models are also the first ViT models that surpass state-of-the-art CNNs with significantly lower latency on mobile devices. For instance, ElasticViT-S1 runs 2.62x faster than EfficientNet-B0 with 0.1% higher accuracy.
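To make the sampling procedure in the description above concrete, here is a minimal sketch, not the authors' implementation: every name and threshold in it (complexity_aware_candidates, performance_aware_pick, MAX_FLOPS_GAP, TOP_K, the placeholder accuracy proxy) is an assumption, used only to illustrate how bounding the FLOPs gap between adjacent training steps could be combined with a top-k accuracy filter.

```python
import random

# Minimal sketch (assumed names, not the paper's code) of the two sampling ideas:
# complexity-aware sampling keeps the FLOPs of subnets sampled in adjacent
# training steps close together, and performance-aware sampling then prefers
# candidates that a cheap accuracy proxy ranks highly.

MAX_FLOPS_GAP = 100e6  # assumed FLOPs window between adjacent training steps
TOP_K = 3              # assumed number of top-ranked candidates to sample from


def complexity_aware_candidates(search_space, prev_flops):
    """Restrict the pool to subnets whose FLOPs are close to the previous step's subnet."""
    pool = [s for s in search_space if abs(s["flops"] - prev_flops) <= MAX_FLOPS_GAP]
    return pool or search_space  # fall back to the whole space if the window is empty


def performance_aware_pick(pool, accuracy_proxy):
    """Among complexity-matched candidates, sample from the top-k by predicted accuracy."""
    ranked = sorted(pool, key=accuracy_proxy, reverse=True)
    return random.choice(ranked[:TOP_K])


if __name__ == "__main__":
    # Toy search space: each "subnet" is just an id and a FLOPs count between 50M and 800M.
    search_space = [{"id": i, "flops": f}
                    for i, f in enumerate(random.sample(range(50_000_000, 800_000_000), 200))]

    def accuracy_proxy(subnet):
        # Placeholder proxy; in practice this would be a learned accuracy predictor.
        return -abs(subnet["flops"] - 300e6)

    prev_flops = 300e6
    for step in range(5):
        pool = complexity_aware_candidates(search_space, prev_flops)
        subnet = performance_aware_pick(pool, accuracy_proxy)
        print(f"step {step}: subnet {subnet['id']} at {subnet['flops'] / 1e6:.0f}M FLOPs")
        prev_flops = subnet["flops"]  # the next step samples near this complexity
        # ...one supernet training step on the chosen subnet would happen here...
```

Note that, per the abstract, complexity-aware sampling also covers differently sized subnets across the search space; the sketch only approximates this by letting the FLOPs anchor drift from step to step.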
doi_str_mv 10.48550/arxiv.2303.09730
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2303.09730
language eng
recordid cdi_arxiv_primary_2303_09730
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title ElasticViT: Conflict-aware Supernet Training for Deploying Fast Vision Transformer on Diverse Mobile Devices
url https://arxiv.org/abs/2303.09730