Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity
Format: Article
Language: English
Abstract: Large language models (LLMs) exhibit advanced reasoning skills, enabling robots to comprehend natural language instructions and strategically plan high-level actions through proper grounding. However, LLM hallucination may result in robots confidently executing plans that are misaligned with user goals or, in extreme cases, unsafe. Additionally, inherent ambiguity in natural language instructions can induce task uncertainty, particularly in situations where multiple valid options exist. To address this issue, LLMs must identify such uncertainty and proactively seek clarification. This paper explores the concept of introspective planning as a systematic method for guiding LLMs in forming uncertainty-aware plans for robotic task execution without the need for fine-tuning. We investigate uncertainty quantification in task-level robot planning and demonstrate that introspection significantly improves both success rates and safety compared to state-of-the-art LLM-based planning approaches. Furthermore, we assess the effectiveness of introspective planning in conjunction with conformal prediction, revealing that this combination yields tighter confidence bounds, thereby maintaining statistical success guarantees with fewer superfluous user clarification queries. Code is available at https://github.com/kevinliang888/IntroPlan.
DOI: 10.48550/arxiv.2402.06529
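
The abstract pairs introspective planning with conformal prediction so that the robot asks for clarification only when its prediction set over candidate plans contains more than one option. The Python sketch below is a generic illustration of that split-conformal mechanism, not code from the IntroPlan repository; the function names, calibration scores, and candidate plans are invented for the example.

```python
import numpy as np

def conformal_threshold(cal_confidences, alpha=0.25):
    """Split-conformal quantile of nonconformity scores (1 - confidence of the
    correct plan), computed on a held-out calibration set for 1 - alpha coverage."""
    scores = 1.0 - np.asarray(cal_confidences)
    n = len(scores)
    # Finite-sample corrected quantile level.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q_level, method="higher")

def prediction_set(option_confidences, qhat):
    """Keep every candidate plan whose nonconformity stays below the threshold."""
    return [plan for plan, conf in option_confidences.items() if 1.0 - conf <= qhat]

# Calibration: confidences the planner assigned to the correct plan on past tasks.
qhat = conformal_threshold([0.95, 0.92, 0.90, 0.88, 0.85, 0.83, 0.81, 0.77, 0.70, 0.94])

# Test time: confidences over candidate plans for an ambiguous instruction.
candidates = {"pick up the green cup": 0.74, "pick up the blue cup": 0.71}
plans = prediction_set(candidates, qhat)

if len(plans) > 1:
    print("Ambiguous instruction; asking the user to choose among:", plans)
elif len(plans) == 1:
    print("Confident enough to execute:", plans[0])
else:
    print("No plan meets the confidence bound; requesting help.")
```

Under this kind of scheme, a better-calibrated planner yields a smaller threshold, so the prediction set more often contains a single plan and fewer clarification questions are needed, which is the effect the abstract attributes to combining introspection with conformal prediction.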