RECALL: A Benchmark for LLMs Robustness against External Counterfactual Knowledge
creator | Liu, Yi ; Huang, Lianzhe ; Li, Shicheng ; Chen, Sishuo ; Zhou, Hao ; Meng, Fandong ; Zhou, Jie ; Sun, Xu |
description | LLMs and AI chatbots have improved people's efficiency in various
fields. However, the knowledge needed to answer a question may lie beyond a
model's knowledge boundaries. To mitigate this issue, many researchers have
tried to introduce external knowledge, such as knowledge graphs and Internet
content, into LLMs to supply up-to-date information. However, external
information from the Internet may include counterfactual information that
confuses the model and leads to an incorrect response. There is therefore a
pressing need for LLMs to be able to distinguish reliable information within
external knowledge. To evaluate this ability, we create a benchmark from
existing knowledge bases. Our benchmark consists of two tasks, Question
Answering and Text Generation, and for each task we provide models with a
context containing counterfactual information. Evaluation results show that
existing LLMs are susceptible to interference from unreliable external
knowledge containing counterfactual information, and that simple intervention
methods contribute little to alleviating this issue. |
doi_str_mv | 10.48550/arxiv.2311.08147 |
format | Article |
identifier | DOI: 10.48550/arxiv.2311.08147 |
language | eng |
recordid | cdi_arxiv_primary_2311_08147 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence ; Computer Science - Computation and Language |
title | RECALL: A Benchmark for LLMs Robustness against External Counterfactual Knowledge |
url | https://arxiv.org/abs/2311.08147 |
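The abstract describes an evaluation in which models receive a context containing counterfactual information and are then judged on whether they resist it. A minimal sketch of how one such QA item might be constructed and scored; the function names, the example data, and the simple substring-matching judge are hypothetical illustrations, not the paper's actual implementation:

```python
# Illustrative sketch of a counterfactual-context QA item.
# All names and data here are hypothetical, not taken from the paper.

def build_prompt(context: str, question: str) -> str:
    """Prepend external context (possibly counterfactual) to a question."""
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

def judge_answer(answer: str, factual: str, counterfactual: str) -> str:
    """Classify a model answer: did it keep the reliable fact,
    or was it misled by the injected counterfactual claim?"""
    text = answer.lower()
    if factual.lower() in text:
        return "factual"
    if counterfactual.lower() in text:
        return "misled"
    return "other"

# Hypothetical item: the context falsely claims Paris is in Spain.
prompt = build_prompt(
    "Paris is the capital of Spain.",            # counterfactual edit
    "Which country is Paris the capital of?",
)
verdict = judge_answer("Paris is the capital of Spain.", "France", "Spain")
# verdict == "misled": the model followed the unreliable context
```

A robust model would answer with the factual claim despite the misleading context; the benchmark measures how often models instead echo the counterfactual edit.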