Aligning Language Models to Explicitly Handle Ambiguity
Main authors: , , , , , , ,
Format: Article
Language: English
Abstract: In interactions between users and language model agents, user utterances
frequently exhibit ellipsis (omission of words or phrases) or imprecision (lack
of exactness) to prioritize efficiency. This can lead to varying
interpretations of the same input based on different assumptions or background
knowledge. It is thus crucial for agents to adeptly handle the inherent
ambiguity in queries to ensure reliability. However, even state-of-the-art
large language models (LLMs) still face challenges in such scenarios, primarily
due to the following hurdles: (1) LLMs are not explicitly trained to deal with
ambiguous utterances; (2) the degree of ambiguity perceived by the LLMs may
vary depending on the possessed knowledge. To address these issues, we propose
Alignment with Perceived Ambiguity (APA), a novel pipeline that aligns LLMs to
manage ambiguous queries by leveraging their own assessment of ambiguity (i.e.,
perceived ambiguity). Experimental results on question-answering datasets
demonstrate that APA empowers LLMs to explicitly detect and manage ambiguous
queries while retaining the ability to answer clear questions. Furthermore, our
findings show that APA outperforms training with gold-standard labels,
especially in out-of-distribution scenarios. The data and code are available at
https://github.com/heyjoonkim/APA.
DOI: 10.48550/arxiv.2404.11972
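
The abstract describes APA only at a high level. As a reading aid, the sketch below illustrates one plausible way a model's self-assessed ("perceived") ambiguity could be turned into alignment targets: if the model itself judges a query ambiguous, the training target is a clarifying question; otherwise it is a direct answer. The helper names, prompt wording, and control flow are assumptions made for illustration, not the authors' implementation (see the linked repository for the actual code).

```python
# Minimal sketch (not the authors' released code): derive alignment targets
# from the model's own judgment of query ambiguity ("perceived ambiguity").
# `llm` stands in for any text-in/text-out model call; all prompts and
# helper names here are illustrative assumptions.

from typing import Callable


def perceived_ambiguity(query: str, llm: Callable[[str], str]) -> bool:
    """Self-assessment: ask the model itself whether the query is ambiguous."""
    prompt = (
        "Could the following question be interpreted in more than one way? "
        f"Answer 'yes' or 'no'.\n\nQuestion: {query}"
    )
    return llm(prompt).strip().lower().startswith("yes")


def build_training_target(query: str, llm: Callable[[str], str]) -> str:
    """Choose the alignment target based on the model's perceived ambiguity."""
    if perceived_ambiguity(query, llm):
        # Ambiguous (to this model): target a clarifying question, not an answer.
        return llm(
            f"The question '{query}' is ambiguous. "
            "Write one concise clarifying question instead of answering."
        )
    # Clear (to this model): target a direct answer, preserving normal QA ability.
    return llm(f"Answer concisely: {query}")


if __name__ == "__main__":
    # Toy stand-in model so the sketch runs end to end.
    def toy_llm(prompt: str) -> str:
        if "interpreted in more than one way" in prompt:
            return "yes"
        return "Which person and which award are you asking about?"

    print(build_training_target("When did he win the award?", toy_llm))
```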