Is Robustness Transferable across Languages in Multilingual Neural Machine Translation?
Saved in:
Main Authors: , ,
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Abstract: Robustness, the ability of models to maintain performance in the face of
perturbations, is critical for developing reliable NLP systems. Recent studies
have shown promising results in improving the robustness of models through
adversarial training and data augmentation. However, in machine translation,
most of these studies have focused on bilingual machine translation with a
single translation direction. In this paper, we investigate the transferability
of robustness across different languages in multilingual neural machine
translation. We propose a robustness transfer analysis protocol and conduct a
series of experiments. In particular, we use character-, word-, and multi-level
noise to attack a specific translation direction of the multilingual neural
machine translation model and evaluate the robustness of the other translation
directions. Our findings demonstrate that the robustness gained in one
translation direction can indeed transfer to other translation directions.
Additionally, we empirically find scenarios where robustness to character-level
noise and word-level noise is more likely to transfer.
DOI: 10.48550/arxiv.2310.20162
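
The record contains no code, but the attack step of the described protocol can be sketched. The snippet below is a minimal illustration, not the authors' implementation: the function names (char_noise, word_noise, attack_direction), the specific noise operations, and the toy corpus are all hypothetical. In a real experiment the perturbations would be applied to the training or evaluation data of one translation direction of an actual multilingual NMT model, and robustness would then be measured on the untouched directions.

```python
import random


def char_noise(sentence: str, prob: float = 0.1) -> str:
    """Character-level noise: randomly delete, duplicate, or swap adjacent characters."""
    chars = list(sentence)
    out = []
    i = 0
    while i < len(chars):
        if random.random() < prob:
            op = random.choice(["delete", "duplicate", "swap"])
            if op == "delete":
                i += 1
                continue
            if op == "duplicate":
                out.append(chars[i])          # character appended again below -> duplicated
            if op == "swap" and i + 1 < len(chars):
                out.append(chars[i + 1])
                out.append(chars[i])
                i += 2
                continue
        out.append(chars[i])
        i += 1
    return "".join(out)


def word_noise(sentence: str, prob: float = 0.1) -> str:
    """Word-level noise: randomly drop a word or swap two adjacent words."""
    words = sentence.split()
    out = []
    i = 0
    while i < len(words):
        if random.random() < prob and len(words) > 1:
            op = random.choice(["drop", "swap"])
            if op == "drop":
                i += 1
                continue
            if op == "swap" and i + 1 < len(words):
                out.append(words[i + 1])
                out.append(words[i])
                i += 2
                continue
        out.append(words[i])
        i += 1
    return " ".join(out)


def attack_direction(corpus, attacked_direction, noise_fn):
    """Apply noise only to the source side of one translation direction.

    `corpus` is a list of (direction, (source, target)) pairs. All other
    directions stay clean, so any robustness later observed on them must
    have transferred through the shared multilingual model.
    """
    noisy = []
    for direction, (src, tgt) in corpus:
        if direction == attacked_direction:
            src = noise_fn(src)
        noisy.append((direction, (src, tgt)))
    return noisy


if __name__ == "__main__":
    random.seed(0)
    corpus = [
        ("en-de", ("the cat sat on the mat", "die Katze saß auf der Matte")),
        ("en-fr", ("the cat sat on the mat", "le chat s'est assis sur le tapis")),
    ]
    # Attack only en-de; en-fr is left clean for the transfer evaluation.
    print(attack_direction(corpus, "en-de", char_noise))
    # Multi-level noise can be approximated by composing the two perturbations.
    print(attack_direction(corpus, "en-de", lambda s: word_noise(char_noise(s))))
```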