Pre-training Tasks for Embedding-based Large-scale Retrieval
Format: Article
Language: English
Abstract: We consider the large-scale query-document retrieval problem: given a query (e.g., a question), return the set of relevant documents (e.g., paragraphs containing the answer) from a large document corpus. This problem is often solved in two steps. The retrieval phase first reduces the solution space, returning a subset of candidate documents. The scoring phase then re-ranks the documents. Critically, the retrieval algorithm not only needs high recall but must also be highly efficient, returning candidates in time sublinear in the number of documents. Unlike the scoring phase, which has recently seen significant advances due to BERT-style pre-training tasks on cross-attention models, the retrieval phase remains less well studied. Most previous works rely on classic Information Retrieval (IR) methods such as BM-25 (token matching + TF-IDF weights). These models only accept sparse handcrafted features and cannot be optimized for different downstream tasks of interest. In this paper, we conduct a comprehensive study of embedding-based retrieval models. We show that the key ingredient in learning a strong embedding-based Transformer model is the set of pre-training tasks. With adequately designed paragraph-level pre-training tasks, the Transformer models can remarkably improve over the widely-used BM-25 as well as embedding models without Transformers. The paragraph-level pre-training tasks we studied are Inverse Cloze Task (ICT), Body First Selection (BFS), Wiki Link Prediction (WLP), and the combination of all three.
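The retrieval setup described in the abstract lends itself to a compact illustration. The sketch below is not from the paper; the hashed bag-of-words `encode` function is a hypothetical stand-in for the pre-trained Transformer encoder, and all names are illustrative. It shows the basic dual-encoder pattern: documents are embedded once offline, queries are embedded at request time, and candidates are ranked by inner product, which an approximate nearest-neighbor index would make sublinear at scale.

```python
# Minimal sketch of embedding-based retrieval, assuming a dual-encoder setup:
# queries and documents are encoded independently into a shared vector space,
# and candidates are returned by nearest-neighbor search over precomputed
# document embeddings. The "encoder" here is a toy stand-in (random projection
# of hashed token counts); the paper instead uses a Transformer pre-trained
# with ICT/BFS/WLP. All identifiers below are illustrative, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 128
VOCAB_BUCKETS = 4096
projection = rng.normal(size=(VOCAB_BUCKETS, EMBED_DIM))  # stand-in encoder weights

def encode(text: str) -> np.ndarray:
    """Hypothetical encoder: hashed bag-of-words -> L2-normalized dense vector."""
    counts = np.zeros(VOCAB_BUCKETS)
    for token in text.lower().split():
        counts[hash(token) % VOCAB_BUCKETS] += 1.0
    vec = counts @ projection
    return vec / (np.linalg.norm(vec) + 1e-9)

# Offline: embed the whole corpus once; online retrieval only touches vectors.
corpus = [
    "BM-25 scores documents by token matching with TF-IDF weights.",
    "The Inverse Cloze Task treats a held-out sentence as a pseudo-query.",
    "Cross-attention models re-rank a small set of candidate documents.",
]
doc_matrix = np.stack([encode(d) for d in corpus])  # shape: (num_docs, EMBED_DIM)

def retrieve(query: str, k: int = 2):
    """Return the top-k documents by inner-product similarity."""
    scores = doc_matrix @ encode(query)   # one matrix-vector product per query
    top = np.argsort(-scores)[:k]         # exact top-k; an ANN index would make
    return [(corpus[i], float(scores[i])) for i in top]  # this sublinear at scale

print(retrieve("what is the inverse cloze task"))
```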
DOI: 10.48550/arxiv.2002.03932