Inf-CP: A Reliable Channel Pruning based on Channel Influence

One of the most effective approaches to channel pruning is to trim channels according to the importance of each neuron. However, measuring the importance of each neuron exactly is an NP-hard problem. Previous works prune using the statistics of a single layer of neurons or of several successive layers. Such methods cannot remove the effect that different data have on the model from the reconstruction error, and there is currently no proof that the absolute values of the parameters can be used directly to judge the importance of the weights. A more reasonable approach is to eliminate the differences between batches of data so that the influence of each weight can be measured accurately. In this paper, we propose to use ensemble learning to train a model over different batches of data and to use the influence function (a classic technique from robust statistics) to trace the model's predictions back to the gradients of its training parameters, so that we can determine the responsibility of each parameter, which we call its "influence", on the prediction. In addition, we prove theoretically that back-propagation in a deep network is a first-order Taylor approximation of the influence function of the weights. Extensive experiments show that pruning based on the influence function, combined with ensemble learning, is much more effective than focusing only on error reconstruction. Experiments on CIFAR show that influence pruning achieves state-of-the-art results.
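For background, the influence function mentioned above is the classical construct from robust statistics (popularized for deep models by Koh and Liang): it measures how the learned parameters would shift if a single training point were upweighted infinitesimally. The LaTeX sketch below states only the standard textbook form, under the usual assumptions of a twice-differentiable loss and an invertible Hessian; it is background material, not the paper's own derivation.

\[
\hat{\theta} \;=\; \arg\min_{\theta} \frac{1}{n}\sum_{i=1}^{n} L(z_i,\theta),
\qquad
\hat{\theta}_{\epsilon,z} \;=\; \arg\min_{\theta} \frac{1}{n}\sum_{i=1}^{n} L(z_i,\theta) \;+\; \epsilon\, L(z,\theta)
\]
\[
\mathcal{I}_{\mathrm{up}}(z) \;=\; \left.\frac{d\hat{\theta}_{\epsilon,z}}{d\epsilon}\right|_{\epsilon=0}
\;=\; -\,H_{\hat{\theta}}^{-1}\,\nabla_{\theta} L(z,\hat{\theta}),
\qquad
H_{\hat{\theta}} \;=\; \frac{1}{n}\sum_{i=1}^{n} \nabla_{\theta}^{2} L(z_i,\hat{\theta})
\]

Back-propagation supplies exactly the gradient term \(\nabla_{\theta} L(z,\hat{\theta})\) in this expression, which is presumably the entry point for the abstract's claim that back-propagation is a first-order Taylor approximation of the influence function of the weights.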

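As a concrete, simplified illustration of the batch-averaged, gradient-based channel scoring described above, the Python sketch below scores the output channels of one convolutional layer with a first-order (gradient-times-weight) estimate of how much the loss would change if a channel were removed, averaged over several batches to damp batch-to-batch variation. It assumes a PyTorch model; the names score_channels, conv, loss_fn, and batches are illustrative rather than taken from the paper, and the Hessian-based influence computation and the ensemble training that Inf-CP itself uses are omitted.

import torch

def score_channels(model, conv, loss_fn, batches):
    """First-order 'influence' score for each output channel of `conv`:
    accumulate |sum of (dL/dw) * w| over that channel's weights, then
    average over several batches to damp batch-to-batch differences."""
    scores = torch.zeros(conv.out_channels)
    for x, y in batches:
        model.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()                          # back-propagation supplies dL/dw
        g = conv.weight.grad                     # shape: (out_ch, in_ch, kH, kW)
        w = conv.weight.detach()
        # First-order Taylor estimate of the loss change from zeroing a channel.
        scores += (g * w).sum(dim=(1, 2, 3)).abs().detach()
    return scores / len(batches)                 # low score -> pruning candidate

Averaging the per-channel scores over several batches is the simplest way to reduce the data dependence the abstract objects to; the paper's ensemble of per-batch models pushes the same idea further.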
Bibliographic Details
Main authors: Lai, Bilan; Xiang, Haoran; Shen, Furao
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning
DOI: 10.48550/arxiv.2112.02521
Date: 2021-12-05
Source: arXiv.org
Online access: Order full text (https://arxiv.org/abs/2112.02521)