FrostNet: Towards Quantization-Aware Network Architecture Search
INT8 quantization has become one of the standard techniques for deploying convolutional neural networks (CNNs) on edge devices to reduce memory and computational resource usage. By analyzing the quantized performance of existing mobile-target network architectures, we raise an issue regarding the importance of network architecture for optimal INT8 quantization. In this paper, we present a new network architecture search (NAS) procedure to find a network that guarantees both full-precision (FLOAT32) and quantized (INT8) performance. We first propose a critical but straightforward optimization method that enables quantization-aware training (QAT): floating-point statistic assisting (StatAssist) and stochastic gradient boosting (GradBoost). By integrating gradient-based NAS with StatAssist and GradBoost, we discovered a quantization-efficient network building block, the Frost bottleneck. We then used the Frost bottleneck as the building block for hardware-aware NAS to obtain quantization-efficient networks, FrostNets, which show improved quantization performance compared to other mobile-target networks while maintaining competitive FLOAT32 performance. Our FrostNets achieve higher recognition accuracy than existing CNNs with comparable latency when quantized, owing to a higher latency reduction rate (65% on average).
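The INT8 quantization the abstract refers to maps FLOAT32 values to 8-bit integers through a scale and zero-point. The following is a minimal illustrative sketch of generic affine quantization and dequantization — not the paper's StatAssist/GradBoost method and not tied to any particular framework; the helper names and example values are hypothetical.

```python
# Minimal sketch of affine INT8 quantization (illustrative only; not the
# FrostNet/StatAssist implementation). Helper names and values are hypothetical.

def quantization_params(x_min: float, x_max: float, qmin: int = -128, qmax: int = 127):
    """Derive scale and zero-point so that [x_min, x_max] maps onto [qmin, qmax]."""
    x_min, x_max = min(x_min, 0.0), max(x_max, 0.0)  # range must contain zero
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    return scale, zero_point

def quantize(x: float, scale: float, zero_point: int, qmin: int = -128, qmax: int = 127) -> int:
    """FLOAT32 -> INT8: round to the nearest representable level and clamp."""
    q = int(round(x / scale)) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q: int, scale: float, zero_point: int) -> float:
    """INT8 -> approximate FLOAT32 reconstruction."""
    return scale * (q - zero_point)

if __name__ == "__main__":
    scale, zp = quantization_params(-2.5, 6.0)
    for x in (-2.5, 0.0, 0.1234, 6.0):
        q = quantize(x, scale, zp)
        print(f"x={x:+.4f}  q={q:+4d}  dequant={dequantize(q, scale, zp):+.4f}")
```

Quantization-aware training such as the QAT mentioned above typically simulates this quantize/dequantize round-trip inside the forward pass so the network learns weights that stay accurate after INT8 conversion; the specific StatAssist and GradBoost procedures are described in the paper itself.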
creator | Kim, Taehoon; Yoo, YoungJoon; Yang, Jihoon
doi_str_mv | 10.48550/arxiv.2006.09679 |
format | Article |
creationdate | 2020-06-17
rights | http://creativecommons.org/licenses/by/4.0
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2006.09679 |
language | eng |
recordid | cdi_arxiv_primary_2006_09679 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning; Statistics - Machine Learning
title | FrostNet: Towards Quantization-Aware Network Architecture Search |
url | https://arxiv.org/abs/2006.09679