Ask Optimal Questions: Aligning Large Language Models with Retriever's Preference in Conversational Search
Format: Article
Language: English
Abstract: Conversational search, unlike single-turn retrieval tasks, requires understanding the current question within a dialogue context. The common rewrite-then-retrieve approach aims to decontextualize questions so that they are self-sufficient for off-the-shelf retrievers, but most existing methods produce sub-optimal query rewrites because they are limited in their ability to incorporate signals from the retrieval results. To overcome this limitation, we present RetPO (Retriever's Preference Optimization), a novel framework designed to optimize a language model (LM) to reformulate search queries in line with the preferences of the target retrieval systems. The process begins by prompting a large LM to produce various potential rewrites and then collecting the retrieval performance of these rewrites as the retrievers' preferences. Through this process, we construct a large-scale dataset called RF collection, containing Retrievers' Feedback on over 410K query rewrites across 12K conversations. We then fine-tune a smaller LM on this dataset to align it with the retrievers' preferences. The resulting model achieves state-of-the-art performance on two recent conversational search benchmarks, significantly outperforming existing baselines, including GPT-3.5.
DOI: 10.48550/arxiv.2402.11827