Study of Sensitivity to Weight Perturbation for Convolution Neural Network

Exploring the underlying properties of a neural network helps to explain its internal behavior and functionality. For convolutional neural networks (CNNs), this paper introduces a sensitivity measure to weight perturbation that reflects the extent of the network's output variation and can therefore be used to evaluate the effect of the weights on the network. The sensitivity is defined as the mathematical expectation, over all possible inputs, of the absolute output variation caused by a weight perturbation. Assuming that the conditional distribution of the input is normal, the sensitivity is computed iteratively, layer by layer, through the entire network. Without loss of generality, the paper proposes an approximate algorithm to compute this theoretical sensitivity, which is in effect a mapping from the network's weight perturbation to its output variation. The experimental results show that the computed theoretical sensitivity coincides with the simulated actual output variation of the network, so a criterion can be established to evaluate the influence of the weights on a CNN's output.
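As a concrete illustration of the measure described above, the sketch below estimates a CNN's sensitivity by Monte-Carlo sampling: the weights are perturbed with Gaussian noise and the mean absolute output variation is averaged over normally distributed inputs. The toy network, the perturbation scale, and the function names are illustrative assumptions only; this is not the authors' layer-by-layer algorithm.

```python
# Minimal Monte-Carlo sketch of the sensitivity measure described in the
# abstract: E_x |f(x; W + dW) - f(x; W)|, the expected absolute output
# variation caused by a weight perturbation. The toy network, perturbation
# scale, and input distribution are assumptions for illustration only.
import copy

import torch
import torch.nn as nn

torch.manual_seed(0)

# Small stand-in CNN (assumption; any CNN could be plugged in here).
net = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(4 * 8 * 8, 10),
)


def perturbed_copy(model: nn.Module, scale: float) -> nn.Module:
    """Return a copy of the model with Gaussian noise added to every weight."""
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(scale * torch.randn_like(p))
    return noisy


def estimated_sensitivity(model: nn.Module, scale: float = 0.01,
                          n_inputs: int = 2048) -> float:
    """Monte-Carlo estimate of the expected absolute output variation."""
    noisy = perturbed_copy(model, scale)
    x = torch.randn(n_inputs, 1, 8, 8)  # inputs drawn from a normal distribution
    with torch.no_grad():
        variation = (noisy(x) - model(x)).abs().mean()
    return variation.item()


print(estimated_sensitivity(net))  # grows with the perturbation scale
```

Increasing the perturbation scale should increase the estimate, mirroring how the paper's theoretical sensitivity maps weight perturbation to output variation.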


Bibliographic Details
Published in: IEEE Access, 2019, Vol. 7, pp. 93898-93908
Main authors: Xiang, Lin; Zeng, Xiaoqin; Niu, Yuhu; Liu, Yanjun
Format: Article
Language: English
Online access: Full text
DOI: 10.1109/ACCESS.2019.2926768
ISSN: 2169-3536 (EISSN: 2169-3536)
Source: IEEE Open Access Journals; DOAJ Directory of Open Access Journals; EZB-FREE-00999 freely available EZB journals
Subjects: Algorithms; Artificial neural networks; Computation; Convolution; Convolutional neural network; Evaluation; Feature extraction; Kernel; Neural networks; Neurons; Perturbation; Perturbation methods; Sensitivity; Weight; weight perturbation
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-26T16%3A18%3A00IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_ieee_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Study%20of%20Sensitivity%20to%20Weight%20Perturbation%20for%20Convolution%20Neural%20Network&rft.jtitle=IEEE%20access&rft.au=Xiang,%20Lin&rft.date=2019&rft.volume=7&rft.spage=93898&rft.epage=93908&rft.pages=93898-93908&rft.issn=2169-3536&rft.eissn=2169-3536&rft.coden=IAECCG&rft_id=info:doi/10.1109/ACCESS.2019.2926768&rft_dat=%3Cproquest_ieee_%3E2455638299%3C/proquest_ieee_%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2455638299&rft_id=info:pmid/&rft_ieee_id=8755296&rft_doaj_id=oai_doaj_org_article_0765e2b6035c4735b2dc09c71ea33c8a&rfr_iscdi=true