When Neurons Fail

Bibliographic Details

Published in: arXiv.org, 2017-06
Authors: El Mahdi El Mhamdi; Guerraoui, Rachid
Format: Article
Language: English
Online access: Full text
Description

We view a neural network as a distributed system whose neurons can fail independently, and we evaluate its robustness in the absence of any (recovery) learning phase. We give tight bounds on the number of neurons that can fail without harming the result of a computation. To determine our bounds, we leverage the fact that neural activation functions are Lipschitz-continuous.

Our bound is on a quantity we call the Forward Error Propagation, which captures how much error a neural network propagates when a given number of its components fail. Computing this quantity requires looking only at the topology of the network. Experimentally assessing the robustness of a network, by contrast, requires the costly exercise of testing all possible inputs under all possible failure configurations of the network, facing a discouraging combinatorial explosion.

We distinguish the case of neurons that fail by stopping their activity (crashed neurons) from the case of neurons that fail by transmitting arbitrary values (Byzantine neurons). Interestingly, as we show in the paper, our bound extends easily to the case where synapses can fail. We show how our bound can be leveraged to quantify the effect of memory cost reduction on the accuracy of a neural network, and to estimate the amount of information any neuron needs from its preceding layer, thereby enabling a boosting scheme that prevents neurons from waiting for unnecessary signals. We finally discuss the trade-off between neural network robustness and learning cost.
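To make the role of Lipschitz continuity concrete, here is a minimal sketch of the composition argument on which such topology-only bounds typically rest; the notation below is ours, not necessarily the paper's definition of the Forward Error Propagation. If a layer computes y_j = φ(Σ_i w_ji x_i) with a K-Lipschitz activation φ, then a deviation δ on the layer's inputs moves each output by a bounded amount, and stacking layers multiplies the per-layer gains:

```latex
% Sketch only (our notation): one dense layer with a K-Lipschitz
% activation \varphi absorbs an input deviation \delta as
\[
  \Bigl|\varphi\Bigl(\sum_i w_{ji}(x_i+\delta_i)\Bigr)
      - \varphi\Bigl(\sum_i w_{ji}\,x_i\Bigr)\Bigr|
  \;\le\; K \sum_i |w_{ji}|\,|\delta_i| .
\]
% Composing d layers amplifies an injected error by at most the
% product of per-layer gains, a quantity readable off the weights
% and topology alone:
\[
  \|\Delta y\|_\infty
  \;\le\; \Bigl(\prod_{\ell=1}^{d} K_\ell\,\|W^{(\ell)}\|_\infty\Bigr)
          \|\delta\|_\infty .
\]
```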
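In the same hedged spirit, the sketch below shows what "computing the bound from topology alone" could look like in code, next to the combinatorial cost of exhaustive failure testing. All names here are hypothetical illustrations of the argument above, not the authors' implementation:

```python
import math
import numpy as np

# Hypothetical sketch (our names, not the paper's code): a topology-only
# upper bound on how far the output can move when some inputs deviate by
# at most `delta` -- e.g. crashed neurons emitting 0, or Byzantine
# neurons emitting anything within the activation range.

def layer_gain(W: np.ndarray, K: float) -> float:
    """Worst-case amplification of one dense layer with a K-Lipschitz
    activation: K times the infinity-norm of the weight matrix
    (max over output neurons of the absolute row sum)."""
    return K * np.max(np.sum(np.abs(W), axis=1))

def forward_error_bound(weights: list[np.ndarray], delta: float, K: float) -> float:
    """Compose per-layer gains: only the weights/topology are inspected,
    with no input sweep and no enumeration of failure configurations."""
    bound = delta
    for W in weights:
        bound *= layer_gain(W, K)
    return bound

rng = np.random.default_rng(0)
weights = [rng.normal(size=(32, 64)),   # layer 1: 64 -> 32
           rng.normal(size=(16, 32)),   # layer 2: 32 -> 16
           rng.normal(size=(4, 16))]    # layer 3: 16 -> 4
# Sigmoid is (1/4)-Lipschitz; suppose failed neurons deviate by at most 1.
print(forward_error_bound(weights, delta=1.0, K=0.25))

# The brute-force alternative the abstract warns about: with n = 64
# input neurons and f = 5 simultaneous failures there are already
print(math.comb(64, 5))  # 7,624,512 failure configurations,
# each of which would have to be tested over all possible inputs.
```
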
DOI: 10.48550/arxiv.1706.08884
EISSN: 2331-8422
Source: arXiv.org; Free E-Journals
Subjects:
Combinatorial analysis
Computer networks
Computer Science - Distributed, Parallel, and Cluster Computing
Computer Science - Neural and Evolutionary Computing
Continuity (mathematics)
Neural networks
Neurons
Quantitative Biology - Neurons and Cognition
Robustness
Statistics - Machine Learning
Synapses
Topology