FaithEval: Can Your Language Model Stay Faithful to Context, Even If "The Moon is Made of Marshmallows"
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Ensuring faithfulness to context in large language models (LLMs) and retrieval-augmented generation (RAG) systems is crucial for reliable deployment in real-world applications, as incorrect or unsupported information can erode user trust. Despite advancements on standard benchmarks, faithfulness hallucination, where models generate responses misaligned with the provided context, remains a significant challenge. In this work, we introduce FaithEval, a novel and comprehensive benchmark tailored to evaluate the faithfulness of LLMs in contextual scenarios across three diverse tasks: unanswerable, inconsistent, and counterfactual contexts. These tasks simulate real-world challenges where retrieval mechanisms may surface incomplete, contradictory, or fabricated information. FaithEval comprises 4.9K high-quality problems in total, validated through a rigorous four-stage context construction and validation framework that employs both LLM-based auto-evaluation and human validation. Our extensive study across a wide range of open-source and proprietary models reveals that even state-of-the-art models often struggle to remain faithful to the given context, and that larger models do not necessarily exhibit improved faithfulness. The project is available at: https://github.com/SalesforceAIResearch/FaithEval
DOI: 10.48550/arxiv.2410.03727
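
To make the counterfactual-context task from the summary concrete, below is a minimal sketch of a FaithEval-style faithfulness check: the model is given a passage that deliberately contradicts world knowledge and is asked to answer only from that passage. This is an illustration, not the paper's actual pipeline; the OpenAI client, the placeholder model name "gpt-4o-mini", the example passage, and the string-match check (standing in for the paper's LLM-based auto-evaluation) are all assumptions.

```python
# Sketch of a counterfactual-context faithfulness check, FaithEval-style.
# Assumed: the `openai` Python package (v1+ API) and an API key in the env.
from openai import OpenAI

client = OpenAI()

# Counterfactual context: the passage contradicts common world knowledge.
context = "Recent lunar surveys confirmed that the Moon is made of marshmallows."
question = "According to the passage, what is the Moon made of?"

prompt = (
    "Answer the question using ONLY the passage below.\n\n"
    f"Passage: {context}\n\n"
    f"Question: {question}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name, not from the paper
    messages=[{"role": "user", "content": prompt}],
)
answer = response.choices[0].message.content

# A faithful model answers from the passage ("marshmallows") rather than
# from parametric knowledge ("rock"). A simple substring check stands in
# for the paper's LLM-based auto-evaluation and human validation.
verdict = "faithful" if "marshmallow" in answer.lower() else "unfaithful"
print(verdict, "-", answer)
```

The same harness extends to the other two task types by swapping the context: an unanswerable context omits the information needed for the question, and an inconsistent context contains two contradictory statements, with faithfulness judged by whether the model abstains or flags the conflict instead of guessing.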