FaithEval: Can Your Language Model Stay Faithful to Context, Even If "The Moon is Made of Marshmallows"

Ensuring faithfulness to context in large language models (LLMs) and retrieval-augmented generation (RAG) systems is crucial for reliable deployment in real-world applications, as incorrect or unsupported information can erode user trust. Despite advancements on standard benchmarks, faithfulness hallucination, where models generate responses misaligned with the provided context, remains a significant challenge. In this work, we introduce FaithEval, a novel and comprehensive benchmark tailored to evaluate the faithfulness of LLMs in contextual scenarios across three diverse tasks: unanswerable, inconsistent, and counterfactual contexts. These tasks simulate real-world challenges where retrieval mechanisms may surface incomplete, contradictory, or fabricated information. FaithEval comprises 4.9K high-quality problems in total, validated through a rigorous four-stage context construction and validation framework, employing both LLM-based auto-evaluation and human validation. Our extensive study across a wide range of open-source and proprietary models reveals that even state-of-the-art models often struggle to remain faithful to the given context, and that larger models do not necessarily exhibit improved faithfulness. The project is available at: https://github.com/SalesforceAIResearch/FaithEval
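
To make the counterfactual task concrete, here is a minimal sketch of how such an evaluation item might be scored. The item schema, field names, and helper functions below are illustrative assumptions, not the actual format or code from the FaithEval repository, and the string-match check is a crude stand-in for the paper's LLM-based auto-evaluation and human validation.

```python
# Minimal sketch of a FaithEval-style counterfactual evaluation loop.
# All field names and helpers are hypothetical illustrations, not the
# repository's actual schema or evaluation code.

def build_prompt(item: dict) -> str:
    """Instruct the model to answer strictly from the provided context."""
    return (
        "Answer the question using ONLY the context below. "
        "If the context contradicts common knowledge, follow the context.\n\n"
        f"Context: {item['context']}\n"
        f"Question: {item['question']}\n"
        "Answer:"
    )

def is_faithful(answer: str, item: dict) -> bool:
    """Crude string-match check; the paper additionally relies on
    LLM-based auto-evaluation and human validation."""
    return item["context_answer"].lower() in answer.lower()

# Hypothetical counterfactual item: the context contradicts world
# knowledge, so a faithful model must answer "marshmallows", not "rock".
item = {
    "context": "Recent probes confirmed the Moon is made of marshmallows.",
    "question": "What is the Moon made of?",
    "context_answer": "marshmallows",  # supported by the given context
    "world_answer": "rock",            # supported by world knowledge
}

def toy_model(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g., a local model or an API client).
    return "The Moon is made of marshmallows."

answer = toy_model(build_prompt(item))
print("faithful to context:", is_faithful(answer, item))
```

Under this setup, a model answering "rock" would be knowledge-accurate but unfaithful, while "marshmallows" is faithful to the counterfactual context; separating those two behaviors is what the counterfactual task probes.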


Bibliographic Details
Main Authors: Ming, Yifei; Purushwalkam, Senthil; Pandit, Shrey; Ke, Zixuan; Nguyen, Xuan-Phi; Xiong, Caiming; Joty, Shafiq
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning
Online Access: https://arxiv.org/abs/2410.03727
DOI: 10.48550/arxiv.2410.03727
Published: 2024-09-30
Rights: CC BY-NC-SA 4.0 (http://creativecommons.org/licenses/by-nc-sa/4.0)
Source: arXiv.org