Overcoming Catastrophic Forgetting in Massively Multilingual Continual Learning

Real-life multilingual systems should be able to efficiently incorporate new languages as data distributions fed to the system evolve and shift over time. To do this, systems need to handle the issue of catastrophic forgetting, where the model performance drops for languages or tasks seen further in its past. In this paper, we study catastrophic forgetting, as well as methods to minimize this, in a massively multilingual continual learning framework involving up to 51 languages and covering both classification and sequence labeling tasks. We present LR ADJUST, a learning rate scheduling method that is simple, yet effective in preserving new information without strongly overwriting past knowledge. Furthermore, we show that this method is effective across multiple continual learning approaches. Finally, we provide further insights into the dynamics of catastrophic forgetting in this massively multilingual setup.

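The abstract names LR ADJUST, a learning rate scheduling method, but this record does not spell out its schedule. The sketch below is only a minimal illustration of the general idea as stated here, assuming that the learning rate is scaled down each time training moves on to a new language so that later updates perturb previously learned weights less strongly. The function name lr_for_language, the decay factor, the toy logistic-regression model, and the synthetic "languages" are all hypothetical and are not the authors' implementation.

```python
# Illustrative sketch only: the exact LR ADJUST schedule is not given in this record.
# Assumption: the learning rate shrinks for every new language seen during
# sequential training, limiting how strongly past knowledge is overwritten.
import numpy as np


def lr_for_language(base_lr: float, language_index: int, decay: float = 0.5) -> float:
    """Hypothetical schedule: decay the learning rate for each new language."""
    return base_lr * (decay ** language_index)


def train_sequentially(languages, base_lr=0.1, epochs=20):
    """Fit one shared logistic-regression head on each language in sequence."""
    dim = len(languages[0][0][0])
    w, b = np.zeros(dim), 0.0
    for t, (X, y) in enumerate(languages):
        lr = lr_for_language(base_lr, t)  # smaller steps for later languages
        X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
            grad_w = X.T @ (p - y) / len(y)
            grad_b = float(np.mean(p - y))
            w -= lr * grad_w  # gradient step with the adjusted learning rate
            b -= lr * grad_b
    return w, b


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def make_language(shift):
        # Toy "language": same task, shifted input distribution.
        X = rng.normal(shift, 1.0, size=(200, 5))
        y = (X[:, 0] + X[:, 1] > 2 * shift).astype(float)
        return X, y

    w, b = train_sequentially([make_language(0.0), make_language(1.5)])
    print("learned weights:", np.round(w, 3), "bias:", round(b, 3))
```

In this toy setup, the reduced step size on the second language limits how far the weights learned on the first language are pulled, which is the intuition the abstract attributes to the method; how LR ADJUST actually sets the rate would have to be taken from the paper itself.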

Bibliographic Details
Published in: arXiv.org, 2023-05
Main authors: Genta Indra Winata; Xie, Lingjue; Radhakrishnan, Karthik; Wu, Shijie; Jin, Xisen; Cheng, Pengxiang; Kulkarni, Mayank; Preotiuc-Pietro, Daniel
Format: Article
Language: English
Subjects: Languages; Learning; Task scheduling
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)
Online access: Full text