Unified Data-Free Compression: Pruning and Quantization without Fine-Tuning
creator | Bai, Shipeng; Chen, Jun; Shen, Xintian; Qian, Yixuan; Liu, Yong |
description | Structured pruning and quantization are promising approaches for reducing the inference time and memory footprint of neural networks. However, most existing methods require the original training dataset to fine-tune the model, which not only incurs heavy resource consumption but is also infeasible for applications with sensitive or proprietary data due to privacy and security concerns. A few data-free methods have therefore been proposed, but they perform data-free pruning and quantization separately and thus fail to exploit the complementarity of the two. In this paper, we propose a novel framework named Unified Data-Free Compression (UDFC), which performs pruning and quantization simultaneously without any data or fine-tuning. Specifically, UDFC starts from the assumption that the partial information of a damaged (e.g., pruned or quantized) channel can be preserved by a linear combination of the other channels, and then derives a reconstruction form from this assumption to restore the information lost to compression. Finally, we formulate the reconstruction error between the original network and its compressed counterpart, and theoretically deduce its closed-form solution. We evaluate UDFC on the large-scale image classification task and obtain significant improvements over various network architectures and compression methods. For example, we achieve a 20.54% accuracy improvement on the ImageNet dataset compared to the SOTA method with a 30% pruning ratio and 6-bit quantization on ResNet-34. |
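The reconstruction idea in the abstract can be made concrete. Below is a minimal NumPy sketch of the channel-reconstruction principle for the pruning case, not the authors' released implementation: a pruned output channel of one convolution is approximated as a least-squares linear combination of the kept channels, and that combination is folded into the next layer's weights, so the compensation needs no data and no fine-tuning. The function name and the plain least-squares solve are illustrative assumptions standing in for the paper's closed-form solution.

```python
import numpy as np

def prune_with_reconstruction(W, W_next, j):
    """Illustrative sketch (hypothetical helper, not the paper's code).
    W:      layer-l weights, shape (out_c, in_c, k, k)
    W_next: layer-(l+1) weights, shape (out_c2, out_c, k2, k2)
    j:      index of the output channel of layer l to prune."""
    out_c = W.shape[0]
    keep = [c for c in range(out_c) if c != j]

    # Flatten each output filter of layer l into a vector.
    F = W.reshape(out_c, -1)
    A = F[keep].T          # columns are the kept filters
    b = F[j]               # the damaged (pruned) filter

    # Closed-form least-squares coefficients expressing the pruned filter
    # as a linear combination of the kept ones.
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Fold the combination into layer l+1: the input slice that used to see
    # channel j is redistributed onto the kept input channels.
    W_next_new = W_next[:, keep].copy()
    for i, c in enumerate(keep):
        W_next_new[:, i] += coef[i] * W_next[:, j]

    return np.delete(W, j, axis=0), W_next_new
```

Per the abstract, the quantization case follows the same principle, with the residual being a channel's quantization error rather than a removed filter; UDFC unifies both in a single reconstruction-error objective solved in closed form.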
doi | 10.48550/arxiv.2308.07209 |
format | Article |
language | eng |
source | arXiv.org |
subjects | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning |
title | Unified Data-Free Compression: Pruning and Quantization without Fine-Tuning |
url | https://arxiv.org/abs/2308.07209 |