Learned Step Size Quantization

Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, but need to overcome the challenge of maintaining high accuracy as precision decreases. Here, we present a method for training such networks, Learned Step Size Quantization, that achieves the highest accuracy to date on the ImageNet dataset when using models, from a variety of architectures, with weights and activations quantized to 2-, 3- or 4-bits of precision, and that can train 3-bit models that reach full precision baseline accuracy.

Detailed description

Saved in:
Bibliographic details
Main authors: Esser, Steven K, McKinstry, Jeffrey L, Bablani, Deepika, Appuswamy, Rathinakumar, Modha, Dharmendra S
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Esser, Steven K
McKinstry, Jeffrey L
Bablani, Deepika
Appuswamy, Rathinakumar
Modha, Dharmendra S
description Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, but need to overcome the challenge of maintaining high accuracy as precision decreases. Here, we present a method for training such networks, Learned Step Size Quantization, that achieves the highest accuracy to date on the ImageNet dataset when using models, from a variety of architectures, with weights and activations quantized to 2-, 3- or 4-bits of precision, and that can train 3-bit models that reach full precision baseline accuracy. Our approach builds upon existing methods for learning weights in quantized networks by improving how the quantizer itself is configured. Specifically, we introduce a novel means to estimate and scale the task loss gradient at each weight and activation layer's quantizer step size, such that it can be learned in conjunction with other network parameters. This approach works using different levels of precision as needed for a given system and requires only a simple modification of existing training code.
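
The description above says that the quantizer step size at each weight and activation layer is learned from a scaled estimate of the task loss gradient, alongside the other network parameters. Below is a minimal PyTorch sketch of that idea. It is not the authors' released code: the helper names (grad_scale, round_ste, LsqQuantizer), the gradient-scale constant 1/sqrt(N * Q_P), and the use of one step size per layer are assumptions based on a common reading of the full paper, not details stated in the abstract itself.

    import torch
    import torch.nn as nn

    def grad_scale(x, scale):
        # Forward: returns x unchanged. Backward: gradient w.r.t. x is multiplied by `scale`.
        return (x - x * scale).detach() + x * scale

    def round_ste(x):
        # Round to the nearest integer, passing the gradient straight through (STE).
        return (x.round() - x).detach() + x

    class LsqQuantizer(nn.Module):
        # Hypothetical module: quantizes a signed tensor to `bits` bits with a learnable step size.
        def __init__(self, bits=3, num_elements=1):
            super().__init__()
            self.q_n = 2 ** (bits - 1)        # magnitude of the most negative level, e.g. 4 for 3 bits
            self.q_p = 2 ** (bits - 1) - 1    # most positive level, e.g. 3 for 3 bits
            self.step = nn.Parameter(torch.tensor(1.0))  # learnable step size s (one per layer)
            # Gradient scale g = 1 / sqrt(num_elements * q_p); an assumption taken from the paper's
            # recipe for keeping the step-size update comparable in magnitude to the weight updates.
            self.g = 1.0 / (num_elements * self.q_p) ** 0.5

        def forward(self, v):
            s = grad_scale(self.step, self.g)                            # scale only the gradient, not the value
            v_bar = round_ste(torch.clamp(v / s, -self.q_n, self.q_p))   # integer-valued code
            return v_bar * s                                             # v_hat, back at the original scale

In use, one quantizer instance would be created per weight or activation tensor, for example:

    quantizer = LsqQuantizer(bits=3, num_elements=layer.weight.numel())
    w_hat = quantizer(layer.weight)   # quantized weights used in the layer's forward pass

The step size s is then updated by the same optimizer as the other network parameters, which is the "simple modification of existing training code" the abstract refers to.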
format Article
identifier DOI: 10.48550/arxiv.1902.08153
language eng
recordid cdi_arxiv_primary_1902_08153
source arXiv.org
subjects Computer Science - Learning
Statistics - Machine Learning
title Learned Step Size Quantization
url https://arxiv.org/abs/1902.08153