Uncertainty Quantification for In-Context Learning of Large Language Models
Main authors: | |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | In-context learning has emerged as a groundbreaking ability of Large Language
Models (LLMs) and has revolutionized various fields by providing a few
task-relevant demonstrations in the prompt. However, trustworthiness issues with
LLMs' responses, such as hallucination, have also been actively discussed.
Existing works have been devoted to quantifying the uncertainty in LLMs'
responses, but they often overlook the complex nature of LLMs and the uniqueness
of in-context learning. In this work, we delve into the predictive uncertainty
of LLMs associated with in-context learning, highlighting that such
uncertainties may stem from both the provided demonstrations (aleatoric
uncertainty) and ambiguities tied to the model's configurations (epistemic
uncertainty). We propose a novel formulation and a corresponding estimation
method to quantify both types of uncertainty. The proposed method offers an
unsupervised way to understand the predictions of in-context learning in a
plug-and-play fashion. Extensive experiments are conducted to demonstrate the
effectiveness of the decomposition. The code and data are available at:
https://github.com/lingchen0331/UQ_ICL |
DOI: | 10.48550/arxiv.2402.10189 |
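
The abstract above centers on splitting the predictive uncertainty of in-context learning into a demonstration-driven (aleatoric) part and a configuration-driven (epistemic) part. As a rough illustration of that idea, the sketch below shows a generic entropy-based decomposition: total predictive entropy, expected per-run entropy, and their gap, computed from predictions gathered under varying demonstrations and model configurations. This is not the paper's formulation (see the linked repository for that); the function names, array shapes, and example values here are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's exact method): entropy-based decomposition
# of predictive uncertainty from class-probability vectors collected by re-running
# in-context learning under different demonstration sets and model configurations.
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (in nats) of probability vectors along the last axis."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def decompose_uncertainty(probs):
    """probs: array of shape (n_runs, n_classes); each row is the predictive
    distribution from one (demonstration set, model configuration) run.

    Returns (total, aleatoric_like, epistemic_like):
      total          = H(mean over runs of p_r)   # entropy of the averaged prediction
      aleatoric_like = mean over runs of H(p_r)   # expected per-run entropy
      epistemic_like = total - aleatoric_like     # disagreement across runs
    """
    probs = np.asarray(probs, dtype=float)
    total = entropy(probs.mean(axis=0))
    aleatoric_like = entropy(probs).mean()
    # Jensen's inequality guarantees total >= aleatoric_like; the max() only
    # guards against tiny negative values from floating-point rounding.
    epistemic_like = max(total - aleatoric_like, 0.0)
    return total, aleatoric_like, epistemic_like

if __name__ == "__main__":
    # Two hypothetical runs that agree with each other (small "epistemic" term)
    # but are individually close to uniform (large "aleatoric" term).
    runs = [[0.4, 0.6], [0.45, 0.55]]
    print(decompose_uncertainty(runs))
```

In this generic pattern, low disagreement across runs with flat individual predictions signals irreducible ambiguity in the input or demonstrations, whereas high disagreement across configurations signals uncertainty about the model itself; the paper's own formulation and estimation procedure are documented in the repository linked in the abstract.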