Investigating Catastrophic Forgetting During Continual Training for Neural Machine Translation

Neural machine translation (NMT) models usually suffer from catastrophic forgetting during continual training, where the models tend to gradually forget previously learned knowledge and swing to fit the newly added data, which may have a different distribution, e.g. a different domain. Although many methods have been proposed to alleviate this problem, what causes it is still not well understood. Against the background of domain adaptation, we investigate the cause of catastrophic forgetting from the perspectives of modules and parameters (neurons). The investigation of the NMT model's modules shows that some modules are tightly related to general-domain knowledge, while other modules are more essential for domain adaptation. The investigation of the parameters shows that some parameters are important for both general-domain and in-domain translation, and that large changes to them during continual training bring about the performance decline on the general domain. We conduct experiments across different language pairs and domains to ensure the validity and reliability of our findings.
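
As a rough illustration of the parameter-level probe the abstract describes, one could score each parameter's importance on general-domain data and compare that with how far continual training moved it. The PyTorch sketch below is only a minimal approximation under assumed interfaces: the |theta * dL/dtheta| importance proxy, and the model, loss, and data-loader names in the commented usage, are illustrative assumptions rather than the authors' actual criterion.

# A minimal sketch (not the paper's exact procedure) of probing parameter-level
# forgetting: score each parameter's importance on general-domain data, then
# measure how much continual (in-domain) training moved it.
import torch

def importance_scores(model, data_loader, loss_fn):
    """Accumulate |param * grad| over a dataset as a crude importance proxy."""
    scores = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
    model.eval()
    for batch in data_loader:
        model.zero_grad()
        loss = loss_fn(model, batch)   # e.g. token-level cross-entropy on the batch
        loss.backward()
        for name, p in model.named_parameters():
            if p.grad is not None:
                scores[name] += (p.detach() * p.grad.detach()).abs()
    return scores

def parameter_change(general_state, adapted_state, eps=1e-8):
    """Relative change of each floating-point parameter after continual training."""
    return {
        name: ((adapted_state[name] - tensor).norm() / (tensor.norm() + eps)).item()
        for name, tensor in general_state.items()
        if tensor.is_floating_point()
    }

# Hypothetical usage (all names below are assumptions, not from the paper):
#   general = build_nmt_model(); general.load_state_dict(torch.load("general.pt"))
#   adapted_state = torch.load("in_domain.pt")
#   imp = importance_scores(general, general_domain_loader, nmt_loss)
#   delta = parameter_change(general.state_dict(), adapted_state)
# Parameters with high general-domain importance *and* a large delta are the
# candidates whose drift would explain the general-domain performance drop.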

Bibliographic Details
Main Authors: Gu, Shuhao; Feng, Yang
Format: Article
Published: 2020-11-01
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language
Online Access: Order full text
DOI: 10.48550/arxiv.2011.00678
Source: arXiv.org