Continuous learning of spiking networks trained with local rules
Artificial neural networks (ANNs) experience catastrophic forgetting (CF) during sequential learning. In contrast, the brain can learn continuously without any signs of catastrophic forgetting. Spiking neural networks (SNNs) are the next generation of ANNs with many features borrowed from biological neural networks. Thus, SNNs potentially promise better resilience to CF. In this paper, we study the susceptibility of SNNs to CF and test several biologically inspired methods for mitigating catastrophic forgetting. SNNs are trained with biologically plausible local training rules based on spike-timing-dependent plasticity (STDP). Local training prohibits the direct use of CF-prevention methods based on gradients of a global loss function. We developed and tested a method to determine the importance of synapses (weights) based on stochastic Langevin dynamics, without the need for gradients. Several other methods of catastrophic-forgetting prevention adapted from analog neural networks were tested as well. The experiments were performed on freely available datasets in the SpykeTorch environment.
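The abstract's central idea — estimating synaptic importance from stochastic Langevin dynamics instead of from gradients of a global loss — can be illustrated with a toy sketch. The paper's actual algorithm is not reproduced here; everything below (the stiffness array `k`, the noise scale, the `consolidated_update` helper) is a hypothetical stand-in for the general principle: under Langevin-style noisy dynamics, weights that fluctuate little sit in steep minima and can be treated as important, and plasticity at those synapses is then damped when a new task is learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one layer of synapses; all constants here are illustrative.
n_syn = 100
w_star = rng.uniform(0.0, 1.0, n_syn)   # weights learned on task A
# Stand-in for how strongly each synapse is constrained by task A
# (in a real SNN this "stiffness" is implicit in the learned dynamics).
k = rng.uniform(0.1, 5.0, n_syn)

# Langevin dynamics around the learned solution: a drift pulling each
# weight back toward w_star plus Gaussian noise. At equilibrium the
# weight variance is inversely proportional to the stiffness k, so
# low-variance synapses can be read off as "important".
eta, sigma, n_steps = 0.01, 0.1, 5000
w = w_star.copy()
samples = np.empty((n_steps, n_syn))
for t in range(n_steps):
    drift = -k * (w - w_star)                                  # restoring force
    w = w + eta * drift + np.sqrt(eta) * sigma * rng.standard_normal(n_syn)
    samples[t] = w

importance = 1.0 / samples.var(axis=0)   # tight fluctuation = important synapse

# Consolidation when training on task B: shrink each synapse's update
# (e.g. an STDP-driven weight change) in proportion to its importance
# for task A, so that crucial weights move less.
def consolidated_update(dw, importance, c=1e-3):
    return dw / (1.0 + c * importance)

# Example: a raw local update, damped by importance before it is applied.
raw_dw = 0.01 * rng.standard_normal(n_syn)
dw = consolidated_update(raw_dw, importance)
```

For the stand-in Ornstein–Uhlenbeck dynamics above, the stationary variance of each weight is approximately σ²/(2k), so the inverse variance recovers the stiffness k — the same flat-versus-sharp intuition that gradient-based importance measures such as elastic weight consolidation exploit, but obtained purely from observed weight fluctuations, which is what makes the approach compatible with gradient-free local training.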
Published in: | arXiv.org, 2021-11 |
---|---|
Main authors: | Antonov, Dmitry; Sviatov, Kirill; Sukhov, Sergey |
Format: | Article |
Language: | English |
Subjects: | Artificial neural networks; Computer Science - Neural and Evolutionary Computing; Learning; Neural networks; Spiking; Synapses; Training |
Online access: | Full text |
DOI: | 10.48550/arxiv.2111.09553 |
EISSN: | 2331-8422 |