Symmetry Regularization and Saturating Nonlinearity for Robust Quantization

Robust quantization improves the tolerance of networks for various implementations, allowing reliable output in different bit-widths or fragmented low-precision arithmetic. In this work, we perform extensive analyses to identify the sources of quantization error and present three insights to robustify a network against quantization: reduction of error propagation, range clamping for error minimization, and inherited robustness against quantization. Based on these insights, we propose two novel methods called symmetry regularization (SymReg) and saturating nonlinearity (SatNL). Applying the proposed methods during training can enhance the robustness of arbitrary neural networks against quantization on existing post-training quantization (PTQ) and quantization-aware training (QAT) algorithms and enables us to obtain a single weight flexible enough to maintain the output quality under various conditions. We conduct extensive studies on CIFAR and ImageNet datasets and validate the effectiveness of the proposed methods.
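The record does not give the exact formulations of SymReg and SatNL, so the sketch below is only an illustrative interpretation of the two ideas named in the abstract: a hard-saturating activation that clamps pre-activations to a fixed range (so quantization noise cannot push them outside a quantizer-friendly interval), and a regularization term that penalizes asymmetry in each weight tensor so that a symmetric quantization grid covers it well. The class name `SaturatingNonlinearity`, the function `symmetry_penalty`, and the coefficient `lambda_sym` are placeholders, not taken from the paper.

```python
# Illustrative sketch only: the paper's exact SymReg / SatNL definitions are not
# given in this record, so the forms below are assumptions, not the authors' method.
import torch
import torch.nn as nn


class SaturatingNonlinearity(nn.Module):
    """Hard-saturating activation: values are clamped to [-bound, bound],
    so perturbations such as quantization noise cannot propagate outside
    a fixed, quantizer-friendly range."""

    def __init__(self, bound: float = 1.0):
        super().__init__()
        self.bound = bound

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.clamp(x, -self.bound, self.bound)


def symmetry_penalty(weight: torch.Tensor) -> torch.Tensor:
    """Penalize asymmetry of a weight tensor around zero, so that a symmetric
    quantization grid (equal positive and negative range) fits it well.
    Asymmetry is measured here as |mean| plus the mismatch between the
    largest positive and largest negative weight magnitudes."""
    mean_term = weight.mean().abs()
    range_term = (weight.max() + weight.min()).abs()  # zero when max == -min
    return mean_term + range_term


def total_loss(task_loss: torch.Tensor, model: nn.Module, lambda_sym: float = 1e-4):
    """Add the symmetry penalty of all weight tensors to the task loss."""
    reg = sum(symmetry_penalty(p) for n, p in model.named_parameters() if "weight" in n)
    return task_loss + lambda_sym * reg
```

If hard clamping hampers gradient flow in practice, a smooth saturating function such as a scaled tanh, or a straight-through estimator around the clamp, would be a drop-in alternative under the same assumptions.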

Bibliographic Details
Main Authors: Park, Sein; Jang, Yeongsang; Park, Eunhyeok
Format: Article
Language: English
Published: 2022-07-30
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning
DOI: 10.48550/arXiv.2208.00338
Online Access: Full text available on arXiv (https://arxiv.org/abs/2208.00338)