Denoising-based UNMT is more robust to word-order divergence than MASS-based UNMT
Format: | Article |
Language: | eng |
Abstract: | We aim to investigate whether UNMT approaches with self-supervised
pre-training are robust to word-order divergence between language pairs. We
achieve this by comparing two models pre-trained with the same self-supervised
pre-training objective. The first model is trained on language pairs with
different word-orders, and the second model is trained on the same language
pairs with source language re-ordered to match the word-order of the target
language. Ideally, UNMT approaches which are robust to word-order divergence
should exhibit no visible performance difference between the two
configurations. In this paper, we investigate two such self-supervised
pre-training based UNMT approaches, namely Masked Sequence-to-Sequence
Pre-Training (MASS), which does not have shuffling noise, and Denoising
AutoEncoder (DAE), which has shuffling noise.
We experiment with five English$\rightarrow$Indic language pairs, i.e.,
en-hi, en-bn, en-gu, en-kn, and en-ta, where the word order of the source language
is SVO (Subject-Verb-Object) and the word order of the target languages is SOV
(Subject-Object-Verb). We observe that for these language pairs, the DAE-based
UNMT approach consistently outperforms MASS in terms of translation accuracy.
Moreover, bridging the word-order gap through reordering improves the translation
accuracy of MASS-based UNMT models, whereas it does not improve the translation
accuracy of DAE-based UNMT models. This observation indicates that DAE-based
UNMT is more robust to word-order divergence than MASS-based UNMT. The
word-shuffling noise in the DAE approach could be the reason for this
robustness to word-order divergence. |
DOI: | 10.48550/arxiv.2303.01191 |
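To make the contrast between the two pre-training objectives concrete, below is a minimal, illustrative sketch (not the authors' implementation) of the two corruption schemes the abstract refers to: MASS-style contiguous span masking, which leaves the order of the surviving tokens untouched, and DAE-style noising with local word shuffling plus word dropping. The function names, window size, drop probability, and span ratio are assumptions chosen for illustration, not the paper's exact configuration.

```python
# Illustrative sketch of the two corruption schemes; parameters are assumed, not the paper's.
import random

MASK = "<mask>"

def mass_noise(tokens, span_ratio=0.5):
    """MASS-style corruption: mask one contiguous span (here ~50% of the sentence).
    The decoder reconstructs only the masked span; token order outside the span
    is never perturbed, so the model sees no shuffling noise."""
    n = len(tokens)
    span_len = max(1, int(n * span_ratio))
    start = random.randint(0, n - span_len)
    source = tokens[:start] + [MASK] * span_len + tokens[start + span_len:]
    target = tokens[start:start + span_len]
    return source, target

def dae_noise(tokens, window=3, drop_prob=0.1):
    """DAE-style corruption: local word shuffling within a small window plus
    random word dropping. The decoder reconstructs the full original sentence,
    so the model must learn to undo word-order perturbations."""
    # Local shuffle: sorting by (position + bounded random offset) keeps each
    # token within roughly `window` positions of where it started.
    keys = [i + random.uniform(0, window) for i in range(len(tokens))]
    shuffled = [tok for _, tok in sorted(zip(keys, tokens))]
    # Random word dropping (fall back to the shuffled sentence if all tokens drop).
    kept = [tok for tok in shuffled if random.random() > drop_prob] or shuffled
    return kept, list(tokens)

if __name__ == "__main__":
    sent = "the cat sat on the red mat".split()
    print("MASS:", mass_noise(sent))
    print("DAE :", dae_noise(sent))
```

The bounded-offset sort in `dae_noise` is one common way to realize local shuffling noise; because the DAE objective forces the model to restore the original order from such perturbed input, it plausibly explains the robustness to word-order divergence reported in the abstract.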