Order-Agnostic Cross Entropy for Non-Autoregressive Machine Translation
Format: Article
Language: English
Online access: Order full text
Abstract: We propose a new training objective named order-agnostic cross entropy (OaXE) for fully non-autoregressive translation (NAT) models. OaXE improves the standard cross-entropy loss to ameliorate the effect of word reordering, which is a common source of the critical multimodality problem in NAT. Concretely, OaXE removes the penalty for word order errors, and computes the cross entropy loss based on the best possible alignment between model predictions and target tokens. Since the log loss is very sensitive to invalid references, we leverage cross entropy initialization and loss truncation to ensure the model focuses on a good part of the search space. Extensive experiments on major WMT benchmarks show that OaXE substantially improves translation performance, setting new state of the art for fully NAT models. Further analyses show that OaXE alleviates the multimodality problem by reducing token repetitions and increasing prediction confidence. Our code, data, and trained models are available at https://github.com/tencent-ailab/ICML21_OAXE.
DOI: 10.48550/arxiv.2106.05093
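
For illustration, here is a minimal sketch of the core idea in the abstract: computing cross entropy under the best alignment (a permutation) between the model's position-wise predictions and the target tokens, which reduces to a minimum-cost bipartite matching solvable with the Hungarian algorithm. This is not the released implementation; the function name `oaxe_loss`, the NumPy/SciPy dependency, and the `[T, V]` log-probability input are assumptions made for the sketch.

```python
# Minimal sketch of order-agnostic cross entropy (OaXE) for one sentence.
# NOT the authors' code: names, shapes, and the SciPy dependency are
# assumptions; see https://github.com/tencent-ailab/ICML21_OAXE for the
# released implementation.
import numpy as np
from scipy.optimize import linear_sum_assignment


def oaxe_loss(log_probs: np.ndarray, target: np.ndarray) -> float:
    """Cross entropy under the best prediction-target alignment.

    log_probs: [T, V] position-wise log-probabilities from a fully NAT decoder.
    target:    [T] target token ids (NAT predicts the target length, so the
               number of prediction slots matches the number of target tokens).
    """
    # cost[i, j] = -log P(target token j | prediction slot i)
    cost = -log_probs[:, target]                # [T, T]
    # Hungarian algorithm: minimum-cost perfect matching, i.e. the best
    # ordering of the target tokens over the prediction slots.
    rows, cols = linear_sum_assignment(cost)
    return float(cost[rows, cols].sum())


# Toy usage: 3 prediction slots over a 5-token vocabulary.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 5))
log_probs = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
print(oaxe_loss(log_probs, np.array([2, 0, 4])))
```

The other two components named in the abstract would wrap this loss: cross entropy initialization (pretraining with the standard position-wise loss before switching to OaXE) and loss truncation (discarding the highest-loss, likely misaligned examples during training). The abstract specifies them only at this level of detail, so they are left out of the sketch.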