In-Context Learning Enables Robot Action Prediction in LLMs
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Recently, Large Language Models (LLMs) have achieved remarkable success using in-context learning (ICL) in the language domain. However, leveraging the ICL capabilities within LLMs to directly predict robot actions remains largely unexplored. In this paper, we introduce RoboPrompt, a framework that enables off-the-shelf text-only LLMs to directly predict robot actions through ICL without training. Our approach first heuristically identifies keyframes that capture important moments from an episode. Next, we extract end-effector actions from these keyframes as well as the estimated initial object poses, and both are converted into textual descriptions. Finally, we construct a structured template to form ICL demonstrations from these textual descriptions and a task instruction. This enables an LLM to directly predict robot actions at test time. Through extensive experiments and analysis, RoboPrompt shows stronger performance than zero-shot and ICL baselines in simulated and real-world settings.
DOI: 10.48550/arxiv.2410.12782
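The abstract describes a three-step pipeline: identify keyframes, textualize the end-effector actions and estimated initial object poses, and assemble them into a structured ICL prompt for a text-only LLM. The sketch below illustrates only that prompt-assembly step. It is a hypothetical illustration, not the authors' released code: the `Keyframe` fields, the `textualize_action` format, and the template layout are all invented here for concreteness.

```python
# Hypothetical sketch of a RoboPrompt-style ICL prompt assembly. The field
# names, rounding, and template layout are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class Keyframe:
    """End-effector action at an important moment of a demonstration episode."""
    xyz: tuple[float, float, float]           # end-effector position
    quat: tuple[float, float, float, float]   # end-effector orientation
    gripper_open: bool


def textualize_action(kf: Keyframe) -> str:
    # Round values to keep the token count small; the exact discretization
    # used by the paper is not specified here, so this is an assumption.
    pos = ", ".join(f"{v:.2f}" for v in kf.xyz)
    rot = ", ".join(f"{v:.2f}" for v in kf.quat)
    grip = "open" if kf.gripper_open else "closed"
    return f"pos: [{pos}], rot: [{rot}], gripper: {grip}"


def build_icl_prompt(instruction: str,
                     demos: list[tuple[dict[str, str], list[Keyframe]]],
                     test_object_poses: dict[str, str]) -> str:
    """Pack textualized demonstrations and the test scene into one prompt.

    Each demo pairs the estimated initial object poses (already rendered as
    text) with the keyframe actions extracted from one episode.
    """
    parts = [f"Task: {instruction}"]
    for i, (object_poses, keyframes) in enumerate(demos, 1):
        parts.append(f"\nDemonstration {i}:")
        for name, pose in object_poses.items():
            parts.append(f"  {name}: {pose}")
        for t, kf in enumerate(keyframes):
            parts.append(f"  step {t}: {textualize_action(kf)}")
    parts.append("\nTest scene:")
    for name, pose in test_object_poses.items():
        parts.append(f"  {name}: {pose}")
    parts.append("Predict the action for each step:")
    return "\n".join(parts)
```

At test time, the returned string would be sent to any off-the-shelf text-only LLM and the numeric action lines parsed out of its completion; the paper's actual keyframe heuristic, pose estimation, and template details may differ from this sketch.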