DNN compression by ADMM-based joint pruning

The success of deep neural networks (DNNs) has motivated pursuit of both computationally and memory efficient models for applications in resource-constrained systems such as embedded devices. In line with this trend, network pruning methods reducing redundancy in over-parameterized models are being studied actively.

Bibliographic Details
Published in: Knowledge-based systems, 2022-03, Vol. 239, p. 107988, Article 107988
Main authors: Lee, Geonseok; Lee, Kichun
Format: Article
Language: English
Subjects:
Online access: Full text
container_end_page
container_issue
container_start_page 107988
container_title Knowledge-based systems
container_volume 239
creator Lee, Geonseok
Lee, Kichun
description The success of deep neural networks (DNNs) has motivated pursuit of both computationally and memory efficient models for applications in resource-constrained systems such as embedded devices. In line with this trend, network pruning methods reducing redundancy in over-parameterized models are being studied actively. Previous works on this research have demonstrated the ability to learn a compact network by imposing sparsity constraints on the parameters, but most of them have difficulty not only in identifying both connections and neurons to be pruned, but also in converging to optimal solutions. We propose a systematic DNN compression method where weights and network architectures are jointly optimized. We solve the joint problem using alternating direction method of multipliers (ADMM), a powerful technique capable of handling non-convex separable programming. Additionally, we provide a holistic pruning approach, an integrated form of our method, for automatically pruning networks without specific layer-wise hyper-parameters. To verify our work, we deployed the proposed method to a variety of state-of-the-art convolutional neural networks (CNNs) on three image classification benchmark datasets: MNIST, CIFAR-10, and ImageNet. Results show that the proposed pruning method effectively compresses the network parameters and reduces the computation cost while preserving prediction accuracy.
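The abstract describes casting pruning as a sparsity-constrained training problem and solving it with the alternating direction method of multipliers (ADMM). The following is a minimal illustrative sketch of that general three-step ADMM scheme, not the authors' joint weight-and-architecture algorithm: it prunes a toy least-squares "layer" under a hard cardinality budget. The toy loss, the penalty `rho`, the step size `lr`, the budget `num_keep`, and the helper `project_topk` are all assumptions introduced only for illustration.

```python
# Illustrative sketch only (assumed toy problem and hyper-parameters),
# showing the standard ADMM structure used for sparsity-constrained pruning.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))            # toy "layer" inputs
w_true = np.zeros(50)
w_true[:5] = 1.0                          # sparse ground-truth weights
y = X @ w_true + 0.01 * rng.normal(size=200)

rho, lr, num_keep = 1.0, 1e-3, 5          # ADMM penalty, step size, sparsity budget (assumed values)

def project_topk(v, k):
    """Euclidean projection onto {v : ||v||_0 <= k}: keep the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

w = np.zeros(50)   # primal variable (the weights being trained)
z = np.zeros(50)   # auxiliary variable constrained to be sparse
u = np.zeros(50)   # scaled dual variable

for _ in range(3000):
    # w-update: gradient step on loss(w) + (rho/2) * ||w - z + u||^2
    grad = X.T @ (X @ w - y) / len(y) + rho * (w - z + u)
    w = w - lr * grad
    # z-update: project (w + u) onto the non-convex sparsity constraint set
    z = project_topk(w + u, num_keep)
    # u-update: dual ascent on the consensus constraint w = z
    u = u + (w - z)

w_pruned = project_topk(w, num_keep)      # final hard pruning (retraining would follow in practice)
rel_err = np.linalg.norm(X @ w_pruned - y) / np.linalg.norm(y)
print("nonzero weights:", np.count_nonzero(w_pruned), " relative error:", round(rel_err, 4))
```

The key point the sketch conveys is the z-update: because it is a plain top-k magnitude projection, the otherwise non-convex cardinality constraint is handled in closed form, which is what makes ADMM attractive for this kind of pruning problem.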
doi_str_mv 10.1016/j.knosys.2021.107988
format Article
fulltext fulltext
identifier ISSN: 0950-7051
ispartof Knowledge-based systems, 2022-03, Vol.239, p.107988, Article 107988
issn 0950-7051
1872-7409
language eng
recordid cdi_proquest_journals_2638772645
source Elsevier ScienceDirect Journals
subjects Alternating direction method of multipliers (ADMM)
Artificial neural networks
Computer architecture
Constraints
Electronic devices
Image classification
Mathematical models
Neural network compression
Neural networks
Parameter identification
Pruning
Redundancy
Structured pruning
Unstructured pruning
title DNN compression by ADMM-based joint pruning
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-30T14%3A13%3A07IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=DNN%20compression%20by%20ADMM-based%20joint%20pruning&rft.jtitle=Knowledge-based%20systems&rft.au=Lee,%20Geonseok&rft.date=2022-03-05&rft.volume=239&rft.spage=107988&rft.pages=107988-&rft.artnum=107988&rft.issn=0950-7051&rft.eissn=1872-7409&rft_id=info:doi/10.1016/j.knosys.2021.107988&rft_dat=%3Cproquest_cross%3E2638772645%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2638772645&rft_id=info:pmid/&rft_els_id=S0950705121011047&rfr_iscdi=true