Mitigating Catastrophic Forgetting in Long Short-Term Memory Networks

Continual learning on sequential data is critical for many machine learning (ML) deployments. Unfortunately, LSTM networks, which are commonly used to learn on sequential data, suffer from catastrophic forgetting and are limited in their ability to learn multiple tasks continually. We discover that catastrophic forgetting in LSTM networks can be overcome in two novel and readily-implementable ways -- separating the LSTM memory either for each task or for each target label. Our approach eschews the need for explicit regularization, hypernetworks, and other complex methods. We quantify the benefits of our approach on recently-proposed LSTM networks for computer memory access prefetching, an important sequential learning problem in ML-based computer system optimization. Compared to state-of-the-art weight regularization methods to mitigate catastrophic forgetting, our approach is simple, effective, and enables faster learning. We also show that our proposal enables the use of small, non-regularized LSTM networks for complex natural language processing in the offline learning scenario, which was previously considered difficult.
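The key idea in the abstract is to separate the LSTM's memory rather than regularize its weights. The sketch below is one reading of that idea, not the paper's implementation: a PyTorch module (an assumed framework) that shares the LSTM weights across tasks but keeps one hidden/cell state per task. The class and method names (TaskSeparatedLSTM, reset_task) are invented here for illustration.

```python
# Minimal, hypothetical sketch (not the authors' code): per-task memory separation
# for an LSTM. The recurrent weights are shared across tasks, but each task keeps
# its own hidden/cell state, so training on a new task does not overwrite the
# memory carried for earlier tasks.
import torch
import torch.nn as nn


class TaskSeparatedLSTM(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, num_classes: int):
        super().__init__()
        self.hidden_size = hidden_size
        self.cell = nn.LSTMCell(input_size, hidden_size)   # weights shared by all tasks
        self.head = nn.Linear(hidden_size, num_classes)
        self.states = {}                                    # task_id -> (h, c)

    def reset_task(self, task_id, batch_size: int, device=None):
        # Allocate a fresh memory slot for this task (or, in the per-label
        # variant, for this target label).
        zeros = torch.zeros(batch_size, self.hidden_size, device=device)
        self.states[task_id] = (zeros.clone(), zeros.clone())

    def forward(self, x_seq: torch.Tensor, task_id):
        # x_seq: (seq_len, batch, input_size). Only the state slot belonging
        # to `task_id` is read and updated.
        h, c = self.states[task_id]
        for x_t in x_seq:
            h, c = self.cell(x_t, (h, c))
        # Detach so the stored state does not keep the computation graph alive.
        self.states[task_id] = (h.detach(), c.detach())
        return self.head(h)


# Hypothetical usage: two tasks share weights but never share memory.
model = TaskSeparatedLSTM(input_size=32, hidden_size=64, num_classes=10)
model.reset_task("task_A", batch_size=8)
model.reset_task("task_B", batch_size=8)
logits_a = model(torch.randn(5, 8, 32), "task_A")   # leaves task_B's memory untouched
```

In the per-label variant mentioned in the abstract, the state dictionary would be keyed by target label rather than by task id; how the paper applies this to the prefetching and NLP models is not shown here.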

Bibliographic Details
creator Joshi, Ketaki; Pothukuchi, Raghavendra Pradyumna; Wibisono, Andre; Bhattacharjee, Abhishek
format Article
creationdate 2023-05-26
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0 (free to read)
identifier DOI: 10.48550/arxiv.2305.17244
language eng
source arXiv.org
subjects Computer Science - Learning
title Mitigating Catastrophic Forgetting in Long Short-Term Memory Networks
url https://arxiv.org/abs/2305.17244