Evaluation of Attribution Bias in Retrieval-Augmented Large Language Models
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Attributing answers to source documents is an approach used to enhance the
verifiability of a model's output in retrieval augmented generation (RAG).
Prior work has mainly focused on improving and evaluating the attribution
quality of large language models (LLMs) in RAG, but this may come at the
expense of inducing biases in the attribution of answers. We define and examine
two aspects in the evaluation of LLMs in RAG pipelines, namely attribution
sensitivity and bias with respect to authorship information. We explicitly
inform an LLM about the authors of source documents, instruct it to attribute
its answers, and analyze (i) how sensitive the LLM's output is to the author of
source documents, and (ii) whether the LLM exhibits a bias towards
human-written or AI-generated source documents. We design an experimental setup
in which we use counterfactual evaluation to study three LLMs in terms of their
attribution sensitivity and bias in RAG pipelines. Our results show that adding
authorship information to source documents can significantly change the
attribution quality of LLMs by 3% to 18%. Moreover, we show that LLMs can have
an attribution bias towards explicit human authorship, which can serve as a
competing hypothesis for prior findings that LLM-generated content may be
preferred over human-written content. Our findings indicate that metadata of
source documents can influence LLMs' trust and how they attribute their
answers. Furthermore, our research highlights attribution bias and sensitivity
as a novel aspect of brittleness in LLMs.
DOI: 10.48550/arxiv.2410.12380
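The counterfactual setup described in the abstract can be approximated with a short script: keep the question and the source texts fixed, swap only the authorship labels (e.g. a human name vs. "generated by an LLM"), and compare which sources the model cites in each run. The sketch below is illustrative only; the `query_llm` callable, the prompt format, and the `[n]` citation convention are assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of counterfactual attribution evaluation (not the paper's code).
# Assumptions: `query_llm` is any callable that takes a prompt string and returns
# an answer containing citation markers like [1], [2]; authorship is the only
# field that differs between the factual and counterfactual runs.
import re
from typing import Callable, Dict, List


def build_prompt(question: str, docs: List[Dict[str, str]]) -> str:
    """Format sources with explicit authorship metadata and ask for cited answers."""
    lines = [f"[{i}] (author: {doc['author']}) {doc['text']}"
             for i, doc in enumerate(docs, start=1)]
    sources = "\n".join(lines)
    return ("Answer the question using only the sources below and cite each "
            "supporting source as [n].\n"
            f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:")


def cited_sources(answer: str) -> set:
    """Extract the set of source indices the model attributed its answer to."""
    return {int(m) for m in re.findall(r"\[(\d+)\]", answer)}


def attribution_shift(query_llm: Callable[[str], str],
                      question: str,
                      docs: List[Dict[str, str]],
                      counterfactual_authors: List[str]) -> dict:
    """Compare citations when only the authorship labels are swapped."""
    factual = cited_sources(query_llm(build_prompt(question, docs)))
    cf_docs = [dict(d, author=a) for d, a in zip(docs, counterfactual_authors)]
    counterfactual = cited_sources(query_llm(build_prompt(question, cf_docs)))
    return {"factual": factual,
            "counterfactual": counterfactual,
            "changed": factual != counterfactual}
```

Aggregating the `changed` flag over many questions gives a rough estimate of attribution sensitivity in the spirit of the setup the abstract describes; measuring whether citations shift toward human-labeled or AI-labeled sources gives a rough estimate of attribution bias.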