A Case Study on Context-Aware Neural Machine Translation with Multi-Task Learning
Format: Article
Language: English
Abstract: In document-level neural machine translation (DocNMT), multi-encoder approaches are commonly used to encode the context and source sentences. Recent studies \cite{li-etal-2020-multi-encoder} have shown that the context encoder generates noise and makes the model robust to the choice of context. This paper investigates this observation further by explicitly modelling context encoding through multi-task learning (MTL) to make the model sensitive to the choice of context. We conduct experiments on a cascade MTL architecture consisting of one encoder and two decoders: generating the source from the context is the auxiliary task, and generating the target from the source is the main task. We experiment on the German--English language pair with the News, TED, and Europarl corpora. Evaluation results show that the proposed MTL approach performs better than concatenation-based and multi-encoder DocNMT models in low-resource settings and is sensitive to the choice of context. However, we observe that the MTL models fail to generate the source from the context. These observations align with previous studies and might suggest that the available document-level parallel corpora are not context-aware, and that a robust sentence-level model can outperform context-aware models.
DOI: 10.48550/arxiv.2407.03076
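The abstract describes a cascade MTL architecture with one shared encoder and two decoders, where the main decoder builds on the auxiliary decoder's source-side states. Below is a minimal sketch of how such a setup could be wired in PyTorch; the module layout, hyperparameters, and the loss weight `aux_weight` are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of a cascade MTL DocNMT model, assuming a standard
# Transformer backbone. Names, sizes, and the loss weighting are
# illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class CascadeMTLDocNMT(nn.Module):
    """One shared encoder over the context, plus two cascaded decoders."""

    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True),
            num_layers,
        )
        # Auxiliary decoder: reconstructs the source sentence from the
        # encoded context (the auxiliary task).
        self.aux_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers,
        )
        # Main decoder: generates the target, cross-attending to the
        # auxiliary decoder's source-side states (the "cascade").
        self.main_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers,
        )
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, context_ids, source_ids, target_ids):
        causal = nn.Transformer.generate_square_subsequent_mask
        ctx = self.encoder(self.embed(context_ids))
        src_states = self.aux_decoder(
            self.embed(source_ids), ctx,
            tgt_mask=causal(source_ids.size(1)).to(context_ids.device),
        )
        tgt_states = self.main_decoder(
            self.embed(target_ids), src_states,
            tgt_mask=causal(target_ids.size(1)).to(context_ids.device),
        )
        return self.out(src_states), self.out(tgt_states)

def mtl_loss(model, context_ids, source_ids, target_ids,
             aux_weight=0.5, pad_id=0):
    """Main translation loss plus a weighted auxiliary
    source-reconstruction loss (the weighting is an assumption)."""
    # Teacher forcing: feed all but the last token, predict the next one.
    aux_logits, main_logits = model(
        context_ids, source_ids[:, :-1], target_ids[:, :-1]
    )
    ce = nn.CrossEntropyLoss(ignore_index=pad_id)
    aux_loss = ce(aux_logits.reshape(-1, aux_logits.size(-1)),
                  source_ids[:, 1:].reshape(-1))
    main_loss = ce(main_logits.reshape(-1, main_logits.size(-1)),
                   target_ids[:, 1:].reshape(-1))
    return main_loss + aux_weight * aux_loss
```

Under this reading, making the main decoder attend to the auxiliary decoder's states rather than to the context encoder directly is what forces the model to be sensitive to the context: a degraded source reconstruction should degrade the translation as well.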