Crosslingual Embeddings are Essential in UNMT for Distant Languages: An English to IndoAryan Case Study
Format: Article
Language: English
Abstract: Recent advances in Unsupervised Neural Machine Translation (UNMT) have
minimized the gap between supervised and unsupervised machine translation
performance for closely related language pairs. However, the situation is very
different for distant language pairs: the lack of lexical overlap and the low
syntactic similarity between, for example, English and Indo-Aryan languages lead
to poor translation quality in existing UNMT systems. In this paper, we show
that initializing the embedding layer of UNMT models with cross-lingual
embeddings yields significant BLEU score improvements over existing approaches
that initialize embeddings randomly. Further, static embeddings (freezing the
embedding layer weights) lead to larger gains than updating the embedding layer
weights during training (non-static). We experimented with the Masked Sequence
to Sequence (MASS) and Denoising Autoencoder (DAE) UNMT approaches for three
distant language pairs. The proposed cross-lingual embedding initialization
yields BLEU score improvements of as much as ten times over the baseline for
English-Hindi, English-Bengali, and English-Gujarati. Our analysis shows the
importance of cross-lingual embeddings, compares the approaches, and outlines
the scope for improvement in these systems.
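The static vs. non-static distinction described in the abstract corresponds to a standard embedding-layer configuration choice. Below is a minimal PyTorch sketch, not the authors' code: it assumes pretrained cross-lingual vectors (e.g., produced by an offline mapping tool) are already available as a tensor, and the vocabulary size and embedding dimension shown are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Assumed shapes for illustration: a shared vocabulary of 60k subwords
# and 512-dimensional embeddings.
vocab_size, emb_dim = 60000, 512

# Placeholder for real cross-lingual vectors loaded from disk; here we
# just use random values so the sketch is self-contained and runnable.
pretrained = torch.randn(vocab_size, emb_dim)

# Non-static setting: initialize from cross-lingual vectors and keep
# updating the embedding weights during UNMT training.
emb_nonstatic = nn.Embedding.from_pretrained(pretrained, freeze=False)

# Static setting: initialize from cross-lingual vectors and freeze the
# embedding layer weights for the whole of training.
emb_static = nn.Embedding.from_pretrained(pretrained, freeze=True)
```

Per the abstract, the static variant (`freeze=True`) is the better-performing configuration in the paper's experiments.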
DOI: 10.48550/arxiv.2106.04995