Relying on the Unreliable: The Impact of Language Models' Reluctance to Express Uncertainty
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: As natural language becomes the default interface for human-AI interaction,
there is a need for LMs to appropriately communicate uncertainties in
downstream applications. In this work, we investigate how LMs incorporate
confidence in responses via natural language and how downstream users behave in
response to LM-articulated uncertainties. We examine publicly deployed models
and find that LMs are reluctant to express uncertainties when answering
questions, even when they produce incorrect responses. LMs can be explicitly
prompted to express confidences but tend to be overconfident, resulting in
high error rates (an average of 47%) among confident responses. We test the
risks of LM overconfidence by conducting human experiments and show that users
rely heavily on LM generations, whether or not they are marked by certainty.
Lastly, we investigate the preference-annotated datasets used in post-training
alignment and find that humans are biased against texts with uncertainty. Our
work highlights new safety harms facing human-LM interactions and proposes
design recommendations and mitigating strategies moving forward.
DOI: 10.48550/arxiv.2401.06730