DOCmT5: Document-Level Pretraining of Multilingual Language Models
Format: Article
Language: English
Abstract: In this paper, we introduce DOCmT5, a multilingual sequence-to-sequence language model pretrained with large-scale parallel documents. While previous approaches have focused on leveraging sentence-level parallel data, we try to build a general-purpose pretrained model that can understand and generate long documents. We propose a simple and effective pretraining objective, Document reordering Machine Translation (DrMT), in which shuffled and masked input documents must be translated. DrMT brings consistent improvements over strong baselines on a variety of document-level generation tasks, including over 12 BLEU points for seen-language-pair document-level MT, over 7 BLEU points for unseen-language-pair document-level MT, and over 3 ROUGE-1 points for seen-language-pair cross-lingual summarization. We achieve state-of-the-art (SOTA) results on the WMT20 De-En and IWSLT15 Zh-En document translation tasks. We also conduct extensive analysis of various factors in document pretraining, including (1) the effects of pretraining data quality and (2) the effects of combining monolingual and cross-lingual pretraining. We plan to make our model checkpoints publicly available.
DOI: 10.48550/arxiv.2112.08709
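The abstract describes the DrMT objective as translating input documents that have been shuffled and masked. The sketch below illustrates how such a training pair could be constructed; the sentence-level shuffling granularity, the token-level masking with T5-style `<extra_id_N>` sentinels, and the 15% mask rate are assumptions for illustration, not the paper's exact recipe.

```python
import random


def make_drmt_example(src_sentences, tgt_sentences, mask_rate=0.15, seed=0):
    """Build one hypothetical DrMT training pair: the model reads a shuffled,
    partially masked source-language document and must output the original,
    correctly ordered target-language document."""
    rng = random.Random(seed)

    # 1. Shuffle the source document at the sentence level (assumed granularity).
    shuffled = src_sentences[:]
    rng.shuffle(shuffled)

    # 2. Mask a fraction of tokens with T5-style sentinel tokens (assumed scheme).
    masked_sentences = []
    sentinel_id = 0
    for sentence in shuffled:
        out_tokens = []
        for token in sentence.split():
            if rng.random() < mask_rate:
                out_tokens.append(f"<extra_id_{sentinel_id}>")
                sentinel_id += 1
            else:
                out_tokens.append(token)
        masked_sentences.append(" ".join(out_tokens))

    # 3. Source = noisy, reordered document; target = clean, in-order translation.
    source = " ".join(masked_sentences)
    target = " ".join(tgt_sentences)
    return source, target


if __name__ == "__main__":
    # Toy De-En parallel document used only to show the input/output shapes.
    de = ["Das ist der erste Satz.", "Hier kommt der zweite Satz.", "Und ein dritter."]
    en = ["This is the first sentence.", "Here comes the second sentence.", "And a third."]
    src, tgt = make_drmt_example(de, en)
    print("input :", src)
    print("target:", tgt)
```

In this reading, the decoder must jointly recover document order, fill in masked content, and translate, which is what distinguishes DrMT from plain sentence-level translation pretraining.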