Dynamically Modular and Sparse General Continual Learning

Real-world applications often require learning continuously from a stream of data under ever-changing conditions. When trying to learn from such non-stationary data, deep neural networks (DNNs) undergo catastrophic forgetting of previously learned information. Among the common approaches to avoid catastrophic forgetting, rehearsal-based methods have proven effective. However, they are still prone to forgetting due to task interference, as all parameters respond to all tasks. To counter this, we take inspiration from sparse coding in the brain and introduce dynamic modularity and sparsity (Dynamos) for rehearsal-based general continual learning. In this setup, the DNN learns to respond to stimuli by activating relevant subsets of neurons. We demonstrate the effectiveness of Dynamos on multiple datasets under challenging continual learning evaluation protocols. Finally, we show that our method learns representations that are modular and specialized, while maintaining reusability by activating subsets of neurons with overlaps corresponding to the similarity of stimuli.
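The abstract combines two ideas: a rehearsal memory replayed alongside new data, and input-conditioned activation of a sparse subset of neurons per layer. The sketch below is a minimal illustration of those two ingredients, not the authors' Dynamos implementation: the names TopKGatedLayer and ReservoirBuffer are hypothetical, the paper's learned dynamic gating is approximated here by a simple top-k mask, and rehearsal is approximated by standard reservoir sampling.

```python
# Illustrative sketch only -- NOT the Dynamos method from the paper.
# Shows (1) input-conditioned sparse activation via a top-k gate and
# (2) a fixed-size rehearsal buffer filled by reservoir sampling.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKGatedLayer(nn.Module):
    """Linear layer where only the k most active neurons respond to a
    given stimulus (a stand-in for learned dynamic modularity)."""
    def __init__(self, in_dim, out_dim, k):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.k = k

    def forward(self, x):
        h = F.relu(self.fc(x))
        # Keep only the top-k activations per sample; zero out the rest,
        # so similar inputs activate overlapping neuron subsets.
        _, topk_idx = h.topk(self.k, dim=-1)
        mask = torch.zeros_like(h).scatter_(-1, topk_idx, 1.0)
        return h * mask

class ReservoirBuffer:
    """Fixed-capacity rehearsal memory, a common choice in
    rehearsal-based general continual learning."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        # Reservoir sampling: each seen example has equal probability
        # of residing in the buffer at any point in the stream.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, n):
        return random.sample(self.data, min(n, len(self.data)))

# Usage: each hidden layer activates only 16 of its 128 units per input.
net = nn.Sequential(
    TopKGatedLayer(784, 128, k=16),
    TopKGatedLayer(128, 128, k=16),
    nn.Linear(128, 10),
)
```

During training, a batch from the stream would be mixed with a batch drawn from the buffer, so that only the neuron subsets relevant to each stimulus are updated, limiting interference between tasks.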

Bibliographic Details
Published in: arXiv.org, 2023-01
Main authors: Varma, Arnav; Arani, Elahe; Zonooz, Bahram
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Artificial neural networks; Machine learning; Modularity; Neurons; Rotating generators; Stimuli
Online access: Full text
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-27T05%3A20%3A02IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Dynamically%20Modular%20and%20Sparse%20General%20Continual%20Learning&rft.jtitle=arXiv.org&rft.au=Varma,%20Arnav&rft.date=2023-01-02&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2760362583%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2760362583&rft_id=info:pmid/&rfr_iscdi=true