G-Transformer for Document-level Machine Translation
Format: | Article |
Language: | English |
Abstract: | Document-level MT models are still far from satisfactory. Existing work extends the translation unit from a single sentence to multiple sentences. However, studies show that when the translation unit is further enlarged to a whole document, supervised training of Transformer can fail. In this paper, we find that such failure is caused not by overfitting but by the training getting stuck in local minima. Our analysis shows that the increased complexity of target-to-source attention is a reason for the failure. As a solution, we propose G-Transformer, which introduces a locality assumption as an inductive bias into Transformer, reducing the hypothesis space of the target-to-source attention. Experiments show that G-Transformer converges faster and more stably than Transformer, achieving new state-of-the-art BLEU scores in both non-pretraining and pre-training settings on three benchmark datasets. |
DOI: | 10.48550/arxiv.2105.14761 |
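The abstract above describes restricting target-to-source attention via a locality inductive bias. The sketch below is a minimal, assumption-laden illustration of that general idea, not the paper's G-Transformer implementation: it assumes a 1-to-1 sentence alignment between source and target, and the function names (`locality_mask`, `masked_cross_attention`) and toy sentence lengths are hypothetical.

```python
# Illustrative sketch only: each target sentence attends to its aligned source
# sentence, instead of every target token attending over the whole source
# document. This shrinks the hypothesis space of target-to-source attention.
import numpy as np

def locality_mask(src_sent_lens, tgt_sent_lens):
    """Boolean mask of shape (tgt_len, src_len); entry (i, j) is True iff
    target token i and source token j belong to sentences with the same
    index (a simple 1-to-1 sentence alignment assumption)."""
    src_ids = np.repeat(np.arange(len(src_sent_lens)), src_sent_lens)
    tgt_ids = np.repeat(np.arange(len(tgt_sent_lens)), tgt_sent_lens)
    return tgt_ids[:, None] == src_ids[None, :]

def masked_cross_attention(Q, K, V, mask):
    """Scaled dot-product attention with disallowed positions masked out."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)            # (tgt_len, src_len)
    scores = np.where(mask, scores, -1e9)    # keep only aligned source tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy document: 3 source sentences and 3 target sentences of varying lengths.
rng = np.random.default_rng(0)
src_lens, tgt_lens = [4, 6, 5], [5, 7, 4]
d_model = 8
Q = rng.normal(size=(sum(tgt_lens), d_model))
K = rng.normal(size=(sum(src_lens), d_model))
V = rng.normal(size=(sum(src_lens), d_model))

mask = locality_mask(src_lens, tgt_lens)
out = masked_cross_attention(Q, K, V, mask)
print(mask.shape, out.shape)  # (16, 15) (16, 8)
```

Under this sketch, full document-to-document attention corresponds to an all-True mask, whereas the locality mask confines each target sentence to a much smaller source span, which is the kind of reduced attention hypothesis space the abstract refers to.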