Data-Free Quantization via Mixed-Precision Compensation without Fine-Tuning
Pattern Recognition, 2023. Neural network quantization is a promising approach to model compression, but the resulting accuracy depends heavily on a training/fine-tuning process and requires the original data. This not only incurs heavy computation and time costs but also hinders the protection of privacy and sensitive information...
Saved in:
Main authors: | Chen, Jun; Bai, Shipeng; Huang, Tianxin; Wang, Mengmeng; Tian, Guanzhong; Liu, Yong |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Chen, Jun; Bai, Shipeng; Huang, Tianxin; Wang, Mengmeng; Tian, Guanzhong; Liu, Yong |
description | Pattern Recognition, 2023. Neural network quantization is a promising approach to model compression, but the resulting accuracy depends heavily on a training/fine-tuning process and requires the original data. This not only incurs heavy computation and time costs but also hinders the protection of privacy and sensitive information. Several recent works therefore focus on data-free quantization; however, data-free quantization performs poorly at ultra-low precision. Although generative methods that synthesize data partially address this problem, data synthesis itself demands substantial computation and time. In this paper, we propose a data-free mixed-precision compensation (DF-MPC) method to recover the performance of an ultra-low-precision quantized model without any data or fine-tuning. Assuming that the quantization error introduced by a low-precision quantized layer can be compensated by reconstructing a higher-precision quantized layer, we mathematically formulate the reconstruction loss between the pre-trained full-precision model and its layer-wise mixed-precision quantized counterpart. From this formulation, we derive a closed-form solution that minimizes the reconstruction loss of the feature maps. Since DF-MPC requires no original or synthetic data, it approximates the full-precision model more efficiently. Experimentally, DF-MPC achieves higher accuracy for ultra-low-precision quantized models than recent methods, all without any data or fine-tuning. |
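To make the compensation idea concrete, below is a minimal numerical sketch for two consecutive *linear* layers. It is not the authors' code or exact derivation: the uniform symmetric quantizer, the Gaussian-input assumption, and all names (`quantize`, `W1`, `W2`) are illustrative, and the paper's treatment of nonlinear activations and the quantization of the compensating layer are omitted.

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantizer (illustrative; not necessarily the paper's scheme)."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 32))   # layer l: quantized at ultra-low precision
W2 = rng.standard_normal((16, 64))   # layer l+1: reconstructed to compensate

W1_q = quantize(W1, bits=2)

# For inputs x with E[x x^T] = I, the feature-map reconstruction loss
#   E_x || W2 W1 x - W2_hat W1_q x ||^2  =  || W2 W1 - W2_hat W1_q ||_F^2
# is minimized in closed form by a least-squares solution over the weights;
# no original or synthetic data is needed, only the pre-trained weights.
W2_hat = (W2 @ W1) @ np.linalg.pinv(W1_q)

# Sanity check on random probes (for demonstration only; the method itself
# never touches data).
x = rng.standard_normal((32, 1000))
ref = W2 @ (W1 @ x)
print("error, no compensation:  ", np.linalg.norm(W2 @ (W1_q @ x) - ref))
print("error, with compensation:", np.linalg.norm(W2_hat @ (W1_q @ x) - ref))
```

In this linear toy case the compensation is essentially exact, because the least-squares system is underdetermined; in real networks, nonlinear activations make the compensation only approximate, which is why the paper's closed-form derivation over feature maps is the substantive contribution.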
doi_str_mv | 10.48550/arxiv.2307.00498 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2307.00498 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2307_00498 |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
title | Data-Free Quantization via Mixed-Precision Compensation without Fine-Tuning |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-30T23%3A39%3A12IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Data-Free%20Quantization%20via%20Mixed-Precision%20Compensation%20without%20Fine-Tuning&rft.au=Chen,%20Jun&rft.date=2023-07-02&rft_id=info:doi/10.48550/arxiv.2307.00498&rft_dat=%3Carxiv_GOX%3E2307_00498%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |