Bridging the Gap between Language Models and Cross-Lingual Sequence Labeling
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Large-scale cross-lingual pre-trained language models (xPLMs) have shown effectiveness in cross-lingual sequence labeling (xSL) tasks, such as cross-lingual machine reading comprehension (xMRC), by transferring knowledge from a high-resource language to low-resource languages. Despite this success, we make the empirical observation that there is a training-objective gap between the pre-training and fine-tuning stages: for example, the masked language modeling objective requires only local understanding of the masked token, whereas the span-extraction objective requires global understanding of, and reasoning over, the input passage and question, leading to a discrepancy between pre-training and xMRC. In this paper, we first design a pre-training task tailored for xSL, Cross-lingual Language Informative Span Masking (CLISM), to eliminate this objective gap in a self-supervised manner. Second, we present ContrAstive-Consistency Regularization (CACR), which uses contrastive learning to encourage consistency between the representations of parallel input sequences via unsupervised cross-lingual instance-wise training signals during pre-training. In this way, our method not only bridges the gap between pre-training and fine-tuning, but also enhances the ability of PLMs to capture the alignment between different languages. Extensive experiments show that our method achieves clearly superior results on multiple xSL benchmarks with limited pre-training data. Our method also surpasses previous state-of-the-art methods by a large margin in few-shot settings, where only a few hundred training examples are available.
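For illustration only: the abstract does not give implementation details, but a span-masking step in the spirit of CLISM might look like the sketch below, where `candidate_spans`, the tagger that would produce them, and the start/end supervision are all assumptions rather than the authors' exact recipe.

```python
# A minimal sketch of informative-span masking (assumed, not the paper's
# exact method): mask one contiguous span instead of scattered tokens, so
# the recovery target matches a span-extraction head like the one used in
# MRC fine-tuning.
import random
import torch

def mask_informative_span(input_ids, candidate_spans, mask_token_id):
    """input_ids: 1-D LongTensor of token ids for one sequence.
    candidate_spans: list of (start, end) pairs, end exclusive, e.g.
    produced by an external NER or noun-phrase tagger (an assumption here).
    Returns the masked sequence plus the span boundaries, which can
    supervise start/end span-extraction predictions."""
    start, end = random.choice(candidate_spans)
    masked = input_ids.clone()
    masked[start:end] = mask_token_id
    return masked, (start, end)
```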
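Likewise, a minimal sketch of a contrastive consistency term of the kind CACR describes, assuming mean-pooled representations of parallel sequences and an in-batch InfoNCE formulation; the temperature value and batch layout are hypothetical choices, not taken from the paper.

```python
# Illustrative contrastive consistency loss over parallel sequence
# representations: each representation is pulled toward its parallel
# counterpart (the diagonal) and pushed away from other in-batch
# sequences (the off-diagonals).
import torch
import torch.nn.functional as F

def contrastive_consistency_loss(src_repr, tgt_repr, temperature=0.05):
    """src_repr, tgt_repr: (batch, dim) pooled encoder outputs; row i of
    each tensor represents the same sentence in two languages."""
    src = F.normalize(src_repr, dim=-1)
    tgt = F.normalize(tgt_repr, dim=-1)
    logits = src @ tgt.t() / temperature          # (batch, batch) cosine similarities
    labels = torch.arange(src.size(0), device=src.device)
    # Symmetrize over both retrieval directions (src->tgt and tgt->src).
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))
```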
DOI: 10.48550/arxiv.2204.05210