Continual Mixed-Language Pre-Training for Extremely Low-Resource Neural Machine Translation
Main authors: | , , |
Format: | Article |
Language: | eng |
Abstract: | The data scarcity in low-resource languages has become a bottleneck to
building robust neural machine translation systems. Fine-tuning a multilingual
pre-trained model (e.g., mBART (Liu et al., 2020)) on the translation task is a
good approach for low-resource languages; however, its performance will be
greatly limited when there are unseen languages in the translation pairs. In
this paper, we present a continual pre-training (CPT) framework on mBART to
effectively adapt it to unseen languages. We first construct noisy
mixed-language text from the monolingual corpus of the target language in the
translation pair to cover both the source and target languages, and then, we
continue pre-training mBART to reconstruct the original monolingual text.
Results show that our method can consistently improve the fine-tuning
performance upon the mBART baseline, as well as other strong baselines, across
all tested low-resource translation pairs containing unseen languages.
Furthermore, our approach also boosts the performance on translation pairs
where both languages are seen in the original mBART's pre-training. The code is
available at https://github.com/zliucr/cpt-nmt. |
DOI: | 10.48550/arxiv.2105.03953 |
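
The abstract above describes building noisy mixed-language text from a monolingual target-language corpus and then continuing mBART pre-training to reconstruct the clean text. The snippet below is only a minimal sketch of that data-construction step, not the authors' implementation (which lives in the linked repository): the bilingual lexicon, the word-replacement ratio, and the simple token-masking noise are all illustrative assumptions.

```python
import random

# Hypothetical bilingual lexicon mapping target-language words to
# source-language translations; how such a lexicon is obtained is not
# specified in the abstract and is assumed here for illustration.
LEXICON = {"maison": "house", "chat": "cat", "lit": "bed"}

MASK = "<mask>"  # placeholder mask token; the real token depends on the tokenizer


def make_training_pair(sentence, lexicon=LEXICON,
                       swap_prob=0.3, mask_prob=0.15, seed=None):
    """Build one (noisy mixed-language input, original target) pair.

    The probabilities are illustrative assumptions, not values from the paper.
    """
    rng = random.Random(seed)
    words = sentence.split()

    mixed = []
    for w in words:
        # Step 1: code-switch some target-language words into the source
        # language so the input covers both languages of the translation pair.
        if w.lower() in lexicon and rng.random() < swap_prob:
            mixed.append(lexicon[w.lower()])
        else:
            mixed.append(w)

    noisy = []
    for w in mixed:
        # Step 2: inject denoising-style noise (here: simple token masking),
        # so continued pre-training must reconstruct the clean monolingual text.
        noisy.append(MASK if rng.random() < mask_prob else w)

    return " ".join(noisy), sentence  # (model input, reconstruction target)


if __name__ == "__main__":
    src, tgt = make_training_pair("le chat dort sur le lit", seed=0)
    print("input :", src)   # noisy mixed-language sentence
    print("target:", tgt)   # original monolingual sentence
```

Such (input, target) pairs would then be fed to the sequence-to-sequence denoising objective used for continued pre-training before fine-tuning on the translation task; the exact noise functions and training setup should be taken from the official code at https://github.com/zliucr/cpt-nmt.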