Investigating Catastrophic Forgetting During Continual Training for Neural Machine Translation
Format: Article
Language: eng
Online access: Order full text
Abstract: Neural machine translation (NMT) models usually suffer from catastrophic
forgetting during continual training: the models gradually forget previously
learned knowledge and overfit the newly added data, which may follow a
different distribution, e.g., a different domain. Although many methods have
been proposed to mitigate this problem, the cause of the phenomenon is still
not well understood. In the setting of domain adaptation, we investigate the
cause of catastrophic forgetting from the perspectives of modules and
parameters (neurons). The module-level investigation shows that some modules
of the NMT model are closely tied to general-domain knowledge, while others
are more essential for domain adaptation. The parameter-level investigation
shows that some parameters are important for both general-domain and in-domain
translation, and that large changes to them during continual training account
for the decline in general-domain performance. We conduct experiments across
different language pairs and domains to ensure the validity and reliability of
our findings.
DOI: 10.48550/arxiv.2011.00678
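The abstract's parameter-level analysis rests on two quantities: how important each parameter is to a given domain and how much it shifts during continual training. The paper's exact metrics are not reproduced in this record, so the PyTorch sketch below uses common stand-ins instead: a squared-gradient (Fisher-style) proxy for per-parameter importance and the absolute weight change between the general-domain and adapted checkpoints. The models, `loss_fn`, and the data loader are hypothetical placeholders.

```python
import torch


def domain_importance(model, data_loader, loss_fn):
    """Fisher-style importance proxy: mean squared gradient of the loss
    on one domain's data, accumulated per named parameter."""
    importance = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
    model.eval()
    num_batches = 0
    for batch in data_loader:
        model.zero_grad()
        loss = loss_fn(model, batch)  # forward pass + loss on this domain (placeholder)
        loss.backward()
        for name, p in model.named_parameters():
            if p.grad is not None:
                importance[name] += p.grad.detach() ** 2
        num_batches += 1
    return {name: v / max(num_batches, 1) for name, v in importance.items()}


def parameter_shift(general_model, adapted_model):
    """Absolute per-parameter change between the general-domain checkpoint
    and the continually trained (in-domain) checkpoint."""
    general = dict(general_model.named_parameters())
    return {name: (p.detach() - general[name].detach()).abs()
            for name, p in adapted_model.named_parameters()}
```

Under this reading, parameters that score high on general-domain importance and also show a large shift after continual training are the candidates the abstract points to as responsible for the general-domain performance decline.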