Do RAG Systems Cover What Matters? Evaluating and Optimizing Responses with Sub-Question Coverage
Saved in:
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Evaluating retrieval-augmented generation (RAG) systems remains challenging,
particularly for open-ended questions that lack definitive answers and require
coverage of multiple sub-topics. In this paper, we introduce a novel evaluation
framework based on sub-question coverage, which measures how well a RAG system
addresses different facets of a question. We propose decomposing questions into
sub-questions and classifying them into three types -- core, background, and
follow-up -- to reflect their roles and importance. Using this categorization,
we introduce a fine-grained evaluation protocol that provides insights into the
retrieval and generation characteristics of RAG systems, including three
commercial generative answer engines: You.com, Perplexity AI, and Bing Chat.
Interestingly, we find that while all answer engines cover core sub-questions
more often than background or follow-up ones, they still miss around 50% of
core sub-questions, revealing clear opportunities for improvement. Further,
sub-question coverage metrics prove effective for ranking responses, achieving
82% accuracy compared to human preference annotations. Lastly, we also
demonstrate that leveraging core sub-questions enhances both retrieval and
answer generation in a RAG system, resulting in a 74% win rate over the
baseline that lacks sub-questions.
DOI: 10.48550/arxiv.2410.15531
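
The abstract above measures how well a response covers typed sub-questions (core, background, follow-up). The paper's exact protocol is not given in this record, so the following is only a minimal sketch of how a per-type coverage metric could be computed; the `SubQuestion` class, the `coverage_by_type` helper, and the hard-coded coverage judgments are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class SubQuestion:
    text: str
    kind: str       # "core", "background", or "follow-up"
    covered: bool   # whether the response addresses this sub-question

def coverage_by_type(sub_questions):
    """Return the fraction of covered sub-questions for each type."""
    totals, hits = {}, {}
    for sq in sub_questions:
        totals[sq.kind] = totals.get(sq.kind, 0) + 1
        hits[sq.kind] = hits.get(sq.kind, 0) + int(sq.covered)
    return {kind: hits[kind] / totals[kind] for kind in totals}

# Example: one decomposed question with coverage judgments.
# In practice these judgments would come from an LLM or human annotators;
# here they are hard-coded purely for illustration.
subs = [
    SubQuestion("What is the main cause?", "core", True),
    SubQuestion("What is the historical context?", "background", False),
    SubQuestion("What open issues remain?", "follow-up", False),
]
print(coverage_by_type(subs))
# {'core': 1.0, 'background': 0.0, 'follow-up': 0.0}
```

Per-type rates like these could then be compared across systems (e.g., core coverage of different answer engines) or aggregated to rank candidate responses, in the spirit of the evaluation described in the abstract.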