Target-Oriented Knowledge Distillation with Language-Family-Based Grouping for Multilingual NMT


Bibliographic Details
Published in: ACM Transactions on Asian and Low-Resource Language Information Processing, 2023-03, Vol. 22 (2), p. 1-18, Article 42
Main authors: Do, Heejin; Lee, Gary Geunbae
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Multilingual NMT has developed rapidly but still suffers from performance degradation caused by language diversity and model capacity constraints. To achieve competitive accuracy in multilingual translation despite such limitations, knowledge distillation, which improves the student network by matching the teacher network's output, has been applied and has shown gains by focusing on the important parts of the teacher distribution. However, existing knowledge distillation methods for multilingual NMT rarely consider the knowledge itself, which serves an important function as the student model's target, in the distillation process. In this article, we propose two distillation strategies that use this knowledge effectively to improve the accuracy of multilingual NMT. First, we introduce a language-family-based approach that guides the selection of appropriate knowledge for each language pair. By distilling the knowledge of multilingual teachers, each of which handles a group of languages classified by language family, the multilingual model overcomes the accuracy degradation caused by linguistic diversity. Second, we propose target-oriented knowledge distillation, which intensively focuses on the ground-truth target of the knowledge with a penalty strategy. Our method provides sensible distillation by penalizing samples that lack the actual targets while additionally emphasizing the ground-truth targets. Experiments on TED Talks datasets demonstrate the effectiveness of our method through increased BLEU scores. Discussions of the distilled knowledge and further observations of the methods also validate our results.
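The record gives only a high-level description of target-oriented knowledge distillation, so the following is a minimal, hypothetical sketch of what a word-level distillation loss with a ground-truth-based penalty could look like in PyTorch. The function name target_oriented_kd_loss, the alpha and penalty weights, and the exact weighting scheme are illustrative assumptions, not the formulation from the article.

import torch
import torch.nn.functional as F

def target_oriented_kd_loss(student_logits, teacher_logits, gold_targets,
                            alpha=0.5, penalty=0.1, pad_id=0):
    # student_logits, teacher_logits: (batch, seq_len, vocab) raw scores.
    # gold_targets: (batch, seq_len) reference token ids; pad_id marks padding.
    vocab = student_logits.size(-1)
    log_p_student = F.log_softmax(student_logits, dim=-1)
    p_teacher = F.softmax(teacher_logits, dim=-1)

    # Word-level distillation term: match the teacher's output distribution.
    kd = F.kl_div(log_p_student, p_teacher, reduction="none").sum(-1)

    # Penalty idea (assumed): down-weight positions where the teacher's top
    # prediction misses the ground-truth target, keep full weight otherwise.
    on_target = (p_teacher.argmax(-1) == gold_targets).float()
    kd = kd * (on_target + (1.0 - on_target) * penalty)

    # Ordinary cross-entropy against the ground-truth targets.
    ce = F.cross_entropy(student_logits.view(-1, vocab), gold_targets.view(-1),
                         ignore_index=pad_id, reduction="none")
    ce = ce.view_as(gold_targets).float()

    # Combine the two terms and average over non-padding tokens.
    mask = (gold_targets != pad_id).float()
    loss = ((1.0 - alpha) * ce + alpha * kd) * mask
    return loss.sum() / mask.sum()

# Example with hypothetical shapes: batch of 2 sentences, length 5, vocab 100.
s = torch.randn(2, 5, 100)
t = torch.randn(2, 5, 100)
y = torch.randint(1, 100, (2, 5))
print(target_oriented_kd_loss(s, t, y).item())

In the language-family-based setting described in the abstract, teacher_logits would come from the family-specific teacher assigned to the language pair of the current batch.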
ISSN: 2375-4699, 2375-4702
DOI: 10.1145/3546067