IRLab@iKAT24: Learned Sparse Retrieval with Multi-aspect LLM Query Generation for Conversational Search
Format: Article
Language: English
Abstract: The Interactive Knowledge Assistant Track (iKAT) 2024 focuses on advancing conversational assistants that adapt their interaction and responses based on personalized user knowledge. The track incorporates a Personal Textual Knowledge Base (PTKB) alongside conversational AI tasks such as passage ranking and response generation. Since query rewriting is an effective approach for resolving conversational context, we explore Large Language Models (LLMs) as query rewriters. Specifically, our submitted runs explore multi-aspect query generation using the MQ4CS framework, which we further enhance with Learned Sparse Retrieval via the SPLADE architecture, coupled with robust cross-encoder models. We also propose an alternative to the previous interleaving strategy, aggregating multiple aspects during the reranking phase. Our findings indicate that multi-aspect query generation is effective in enhancing performance when integrated with advanced retrieval and reranking models. Our results also pave the way for better personalization in conversational search, relying on LLMs to integrate personalization within the query rewrite and outperforming human rewrites.
DOI: 10.48550/arxiv.2411.14739
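
The pipeline described in the abstract can be illustrated with a minimal sketch. Here, `generate_aspect_queries` and `splade_retrieve` are hypothetical placeholders standing in for MQ4CS-style LLM query generation and SPLADE first-stage retrieval; only the cross-encoder reranker (the `CrossEncoder` class from sentence-transformers with the `cross-encoder/ms-marco-MiniLM-L-6-v2` checkpoint) names a real component. Aggregating aspects by keeping each passage's best score across aspect queries is one plausible reading of "aggregating multiple aspects during the reranking phase", not the paper's confirmed method.

```python
from sentence_transformers import CrossEncoder


def rerank_with_aspect_aggregation(conversation_turn, k=100, top_n=10):
    # Hypothetical: an LLM rewrites the current turn into several aspect queries
    # (MQ4CS-style multi-aspect query generation).
    aspect_queries = generate_aspect_queries(conversation_turn)

    # First stage: pool candidate passages retrieved for every aspect query.
    # splade_retrieve is a hypothetical SPLADE retriever yielding (passage_id, text).
    candidates = {}
    for q in aspect_queries:
        for pid, text in splade_retrieve(q, k=k):
            candidates[pid] = text

    # Second stage: score every candidate against every aspect query with a
    # cross-encoder, then aggregate per passage by keeping its best score
    # (an alternative to interleaving the per-aspect ranked lists).
    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = {pid: float("-inf") for pid in candidates}
    for q in aspect_queries:
        pairs = [(q, text) for text in candidates.values()]
        for pid, score in zip(candidates, reranker.predict(pairs)):
            scores[pid] = max(scores[pid], float(score))

    # Final ranking by aggregated cross-encoder score.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:top_n]
```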