Characterizing LLM Abstention Behavior in Science QA with Context Perturbations
| Field | Value |
|---|---|
| Main authors | , , |
| Format | Article |
| Language | English |
| Subjects | |
| Online access | Order full text |
Abstract: The correct model response in the face of uncertainty is to abstain from answering a question so as not to mislead the user. In this work, we study the ability of LLMs to abstain from answering context-dependent science questions when provided insufficient or incorrect context. We probe model sensitivity in several settings: removing gold context, replacing gold context with irrelevant context, and providing additional context beyond what is given. In experiments on four QA datasets with six LLMs, we show that performance varies greatly across models, across the type of context provided, and also by question type; in particular, many LLMs seem unable to abstain from answering boolean questions using standard QA prompts. Our analysis also highlights the unexpected impact of abstention performance on QA task accuracy. Counter-intuitively, in some settings, replacing gold context with irrelevant context or adding irrelevant context to gold context can improve abstention performance in a way that results in improvements in task performance. Our results imply that changes are needed in QA dataset design and evaluation to more effectively assess the correctness and downstream impacts of model abstention.
DOI: 10.48550/arxiv.2404.12452
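
The perturbation settings named in the abstract (removing gold context, replacing it with irrelevant context, and adding irrelevant context on top of gold context) can be illustrated with a minimal sketch. Everything below is an assumption made for illustration: the prompt template, the keyword-based abstention check, and the placeholder `dummy_model` callable are hypothetical and are not the authors' released code or evaluation protocol.

```python
# Minimal sketch of the three context-perturbation settings described in the abstract.
# The prompt wording, abstention keywords, and model stand-in are illustrative assumptions.
from typing import Callable

PROMPT = (
    "Answer the question using the context. "
    "If the context is insufficient, reply 'unanswerable'.\n"
    "Context: {context}\nQuestion: {question}\nAnswer:"
)


def build_prompts(question: str, gold_ctx: str, irrelevant_ctx: str) -> dict[str, str]:
    """Construct one prompt per perturbation setting."""
    settings = {
        "gold": gold_ctx,                                           # original, answerable setting
        "no_context": "",                                           # gold context removed
        "irrelevant": irrelevant_ctx,                               # gold replaced with irrelevant context
        "gold_plus_irrelevant": gold_ctx + "\n" + irrelevant_ctx,   # extra context beyond gold
    }
    return {name: PROMPT.format(context=ctx, question=question) for name, ctx in settings.items()}


def is_abstention(answer: str) -> bool:
    """Crude string-match abstention check (an assumption, not the paper's metric)."""
    return any(k in answer.lower() for k in ("unanswerable", "cannot answer", "not enough information"))


if __name__ == "__main__":
    # Toy stand-in for an LLM call: abstains only when the context field is empty.
    dummy_model: Callable[[str], str] = (
        lambda p: "unanswerable" if "Context: \n" in p else "Yes."
    )
    prompts = build_prompts(
        question="Does the study evaluate boolean questions?",
        gold_ctx="The study evaluates four QA datasets, including boolean questions.",
        irrelevant_ctx="Photosynthesis converts light energy into chemical energy.",
    )
    for name, prompt in prompts.items():
        print(name, "-> abstained:", is_abstention(dummy_model(prompt)))
```

Running the sketch prints, per setting, whether the stand-in model abstained; in the actual study these responses would come from the evaluated LLMs and be scored against the dataset's gold answerability labels.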