Unsupervised Context Aware Sentence Representation Pretraining for Multi-lingual Dense Retrieval
Format: Article
Language: eng
Online access: Order full text
Abstract: Recent research demonstrates the effectiveness of using pretrained language models (PLM) to improve dense retrieval and multilingual dense retrieval. In this work, we present a simple but effective monolingual pretraining task called contrastive context prediction (CCP) to learn sentence representations by modeling sentence-level contextual relations. By pulling the embeddings of sentences in a local context closer together and pushing random negative samples away, different languages can form isomorphic structures, so that sentence pairs in two different languages are automatically aligned. Our experiments show that model collapse and information leakage occur easily during contrastive training of a language model, but a language-specific memory bank and an asymmetric batch-normalization operation play essential roles in preventing collapse and information leakage, respectively. In addition, a post-processing step for sentence embeddings is very effective in achieving better retrieval performance. On the multilingual sentence retrieval task Tatoeba, our model achieves new SOTA results among methods that do not use bilingual data, and it shows larger gains when transferring between non-English pairs. On two multilingual query-passage retrieval tasks, XOR Retrieve and Mr.TYDI, our model achieves SOTA results in both zero-shot and supervised settings, even among pretraining models that use bilingual data.
DOI: 10.48550/arxiv.2206.03281
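To make the contrastive objective described in the abstract more concrete, the sketch below implements an InfoNCE-style loss that pulls an anchor sentence embedding toward a neighbouring (context) sentence and pushes it away from negatives drawn from a memory bank. This is only a minimal illustration assuming standard PyTorch; the function names, temperature value, and memory-bank handling are assumptions for illustration, not the paper's released code, and the language-specific memory banks and asymmetric batch normalization it relies on are omitted here.

```python
# Minimal sketch of a contrastive context prediction (CCP) style loss:
# neighbouring-sentence embeddings are pulled together, while random
# negatives (here drawn from a memory bank) are pushed away.
import torch
import torch.nn.functional as F


def ccp_loss(anchor, context, memory_bank, temperature=0.05):
    """InfoNCE-style loss (illustrative, not the authors' implementation).

    anchor:      (B, D) sentence embeddings
    context:     (B, D) embeddings of neighbouring sentences (positives)
    memory_bank: (M, D) previously computed embeddings used as negatives
    """
    anchor = F.normalize(anchor, dim=-1)
    context = F.normalize(context, dim=-1)
    negatives = F.normalize(memory_bank, dim=-1)

    # Cosine similarity of each anchor to its positive and to all negatives.
    pos_logits = (anchor * context).sum(dim=-1, keepdim=True)   # (B, 1)
    neg_logits = anchor @ negatives.t()                          # (B, M)
    logits = torch.cat([pos_logits, neg_logits], dim=1) / temperature

    # The positive sits at index 0 for every anchor.
    labels = torch.zeros(anchor.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)


# Toy usage: random vectors stand in for encoder outputs.
if __name__ == "__main__":
    b, d, m = 8, 128, 1024
    loss = ccp_loss(torch.randn(b, d), torch.randn(b, d), torch.randn(m, d))
    print(loss.item())
```

In practice the anchor and context vectors would come from the PLM encoder, and the memory bank would be refreshed with embeddings from previous batches of the same language, which is what keeps the negatives language-specific and helps prevent the collapse discussed in the abstract.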