Uncertainty Quantification for In-Context Learning of Large Language Models

In-context learning has emerged as a groundbreaking ability of Large Language Models (LLMs) and has revolutionized various fields by providing a few task-relevant demonstrations in the prompt. However, trustworthiness issues with LLM responses, such as hallucination, have also been actively discussed. Existing works have been devoted to quantifying the uncertainty in LLM responses, but they often overlook the complex nature of LLMs and the uniqueness of in-context learning. In this work, we delve into the predictive uncertainty of LLMs associated with in-context learning, highlighting that such uncertainties may stem both from the provided demonstrations (aleatoric uncertainty) and from ambiguities tied to the model's configurations (epistemic uncertainty). We propose a novel formulation and a corresponding estimation method to quantify both types of uncertainty. The proposed method offers an unsupervised way to understand the prediction of in-context learning in a plug-and-play fashion. Extensive experiments are conducted to demonstrate the effectiveness of the decomposition. The code and data are available at: https://github.com/lingchen0331/UQ_ICL.
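
As a reading aid for the abstract above: the decomposition it describes separates uncertainty that stems from the provided demonstrations (aleatoric) from uncertainty tied to the model's configuration (epistemic). The snippet below is a minimal, generic sketch of a standard entropy-based ensemble decomposition, written in Python; it only illustrates the idea and is not the authors' estimator (their implementation is in the linked repository). The function decompose_uncertainty and the toy inputs are hypothetical.

import numpy as np

def entropy(p, eps=1e-12):
    # Shannon entropy (natural log) along the last axis of a probability array.
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def decompose_uncertainty(probs):
    # probs: (n_settings, n_classes); each row is the LLM's predictive
    # distribution under one sampled setting, e.g. a resampled demonstration
    # set or an alternative model/decoding configuration.
    probs = np.asarray(probs, dtype=float)
    total = entropy(probs.mean(axis=0))        # entropy of the averaged prediction
    expected = entropy(probs).mean()           # mean per-setting entropy
    disagreement = total - expected            # mutual-information (disagreement) term
    return total, expected, disagreement

# Toy usage: three resampled demonstration sets on a binary task.
print(decompose_uncertainty([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]]))

Which of the two terms is read as aleatoric and which as epistemic depends on what the settings axis varies (demonstration sets vs. model configurations); the paper treats the two sources separately, so consult the repository for the exact formulation.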

Bibliographic Details
Main authors: Ling, Chen; Zhao, Xujiang; Zhang, Xuchao; Cheng, Wei; Liu, Yanchi; Sun, Yiyou; Oishi, Mika; Osaki, Takao; Matsuda, Katsushi; Ji, Jie; Bai, Guangji; Zhao, Liang; Chen, Haifeng
Format: Article
Language: English
Date: 2024-02-15
Source: arXiv.org
Subjects: Computer Science - Computation and Language; Computer Science - Learning
DOI: 10.48550/arxiv.2402.10189
Rights: Creative Commons Attribution 4.0 (http://creativecommons.org/licenses/by/4.0); open access
Online access: https://arxiv.org/abs/2402.10189
Code and data: https://github.com/lingchen0331/UQ_ICL