KCP: Kernel Cluster Pruning for Dense Labeling Neural Networks

Pruning has become a promising technique used to compress and accelerate neural networks. Existing methods are mainly evaluated on sparse labeling applications. However, dense labeling applications are closer to real-world problems that require real-time processing on resource-constrained mobile devices. Pruning for dense labeling applications is still a largely unexplored field. The prevailing filter channel pruning method removes the entire filter channel, so the interaction between the kernels within one filter channel is ignored. In this study, we propose kernel cluster pruning (KCP) to prune dense labeling networks. We develop a clustering technique to identify the least representational kernels in each layer. By iteratively removing those kernels, the parameters that best represent the entire network are preserved; thus, we achieve better accuracy with a decent reduction in model size and computation. When evaluated on stereo matching and semantic segmentation neural networks, our method can reduce more than 70% of FLOPs with less than 1% accuracy drop. Moreover, for ResNet-50 on ILSVRC-2012, KCP reduces FLOPs by more than 50% with a 0.13% Top-1 accuracy gain. Therefore, KCP achieves state-of-the-art pruning results.
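The abstract describes clustering the kernels of each layer and iteratively removing the least representational ones. The sketch below is one plausible reading of that idea, not the paper's actual algorithm: it clusters the 2D kernel slices of a convolutional layer with a tiny k-means and zeroes out the kernels closest to their cluster centroid, on the assumption that a kernel well summarized by its centroid contributes the least distinct information. The function name `kcp_prune_layer`, the cluster count, the prune ratio, and the "closest to centroid" criterion are all illustrative assumptions.

```python
import numpy as np

def kcp_prune_layer(weights, n_clusters=4, prune_ratio=0.3, n_iter=20, seed=0):
    """Sketch of kernel-cluster pruning for one conv layer.

    weights: array of shape (out_ch, in_ch, kh, kw); each (kh, kw)
    slice is one kernel. Kernels are clustered with a small k-means,
    and the kernels nearest their cluster centroid -- treated here as
    the least representational, since the centroid already summarizes
    them -- are zeroed out.
    """
    rng = np.random.default_rng(seed)
    out_ch, in_ch, kh, kw = weights.shape
    kernels = weights.reshape(out_ch * in_ch, kh * kw)

    # Tiny k-means: init centroids from random kernels, then iterate.
    centroids = kernels[rng.choice(len(kernels), n_clusters, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(kernels[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(n_clusters):
            members = kernels[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)

    # Prune the kernels nearest their own centroid (most redundant).
    dist_to_own = np.linalg.norm(kernels - centroids[labels], axis=1)
    n_prune = int(prune_ratio * len(kernels))
    prune_idx = np.argsort(dist_to_own)[:n_prune]

    mask = np.ones(len(kernels), dtype=bool)
    mask[prune_idx] = False
    pruned = (kernels * mask[:, None]).reshape(weights.shape)
    return pruned, mask.reshape(out_ch, in_ch)
```

Because pruning acts on individual kernels rather than whole filter channels, the interaction between kernels within one filter channel is preserved, which is the distinction the abstract draws against filter channel pruning.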

Full description

Saved in:
Bibliographic Details
Published in: arXiv.org 2021-01
Main Authors: Po-Hsiang Yu; Wu, Sih-Sian; Liang-Gee, Chen
Format: Article
Language: eng
Subjects:
Online Access: Full text
container_title arXiv.org
creator Po-Hsiang Yu
Wu, Sih-Sian
Liang-Gee, Chen
description Pruning has become a promising technique used to compress and accelerate neural networks. Existing methods are mainly evaluated on sparse labeling applications. However, dense labeling applications are closer to real-world problems that require real-time processing on resource-constrained mobile devices. Pruning for dense labeling applications is still a largely unexplored field. The prevailing filter channel pruning method removes the entire filter channel, so the interaction between the kernels within one filter channel is ignored. In this study, we propose kernel cluster pruning (KCP) to prune dense labeling networks. We develop a clustering technique to identify the least representational kernels in each layer. By iteratively removing those kernels, the parameters that best represent the entire network are preserved; thus, we achieve better accuracy with a decent reduction in model size and computation. When evaluated on stereo matching and semantic segmentation neural networks, our method can reduce more than 70% of FLOPs with less than 1% accuracy drop. Moreover, for ResNet-50 on ILSVRC-2012, KCP reduces FLOPs by more than 50% with a 0.13% Top-1 accuracy gain. Therefore, KCP achieves state-of-the-art pruning results.
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2021-01
issn 2331-8422
language eng
recordid cdi_proquest_journals_2478894938
source Free E-Journals
subjects Accuracy
Clustering
Electronic devices
Kernels
Labeling
Model accuracy
Neural networks
Pruning
Reduction
Semantic segmentation
title KCP: Kernel Cluster Pruning for Dense Labeling Neural Networks
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-09T22%3A51%3A24IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=KCP:%20Kernel%20Cluster%20Pruning%20for%20Dense%20Labeling%20Neural%20Networks&rft.jtitle=arXiv.org&rft.au=Po-Hsiang%20Yu&rft.date=2021-01-17&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2478894938%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2478894938&rft_id=info:pmid/&rfr_iscdi=true