Grounding Language Plans in Demonstrations Through Counterfactual Perturbations
Saved in:

| Main authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Keywords: | |
| Online access: | Order full text |
Abstract: In International Conference on Learning Representations (ICLR), 2024.
Grounding the common-sense reasoning of Large Language Models (LLMs) in
physical domains remains a pivotal yet unsolved problem for embodied AI.
Whereas prior works have focused on leveraging LLMs directly for planning in
symbolic spaces, this work uses LLMs to guide the search of task structures and
constraints implicit in multi-step demonstrations. Specifically, we borrow from
manipulation planning literature the concept of mode families, which group
robot configurations by specific motion constraints, to serve as an abstraction
layer between the high-level language representations of an LLM and the
low-level physical trajectories of a robot. By replaying a few human
demonstrations with synthetic perturbations, we generate coverage over the
demonstrations' state space with additional successful executions as well as
counterfactuals that fail the task. Our explanation-based learning framework
trains an end-to-end differentiable neural network to predict successful
trajectories from failures and as a by-product learns classifiers that ground
low-level states and images in mode families without dense labeling. The
learned grounding classifiers can further be used to translate language plans
into reactive policies in the physical domain in an interpretable manner. We
show our approach improves the interpretability and reactivity of imitation
learning through 2D navigation and simulated and real robot manipulation tasks.
Website: https://yanweiw.github.io/glide
DOI: 10.48550/arxiv.2403.17124
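
The abstract above outlines a concrete pipeline: replay a few demonstrations with synthetic perturbations, label the replays as successes or counterfactual failures, and train an end-to-end differentiable network whose per-state mode predictions emerge as a by-product of predicting trajectory success. The following is a minimal, hypothetical sketch of that idea on a toy 2D task; the trajectory generator, success check, network sizes, and training loop are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): perturb a toy 2D demonstration
# to generate successful replays and failing counterfactuals, then train a
# per-state mode classifier whose pooled output predicts trajectory success.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

def demo_trajectory(n=50):
    """Toy 2D demo: move right along y=0, then up along x=1 (two 'modes')."""
    first = np.stack([np.linspace(0, 1, n // 2), np.zeros(n // 2)], axis=1)
    second = np.stack([np.ones(n - n // 2), np.linspace(0, 1, n - n // 2)], axis=1)
    return np.concatenate([first, second], axis=0)

def perturb(traj, scale):
    """Replay the demo with synthetic noise to cover more of the state space."""
    return traj + rng.normal(0.0, scale, size=traj.shape)

def succeeds(traj, tol=0.15):
    """Assumed success check: every state stays near the nominal path."""
    near_first = np.abs(traj[:, 1]) < tol
    near_second = np.abs(traj[:, 0] - 1.0) < tol
    return bool(np.all(near_first | near_second))

# Build a dataset of perturbed replays with only trajectory-level labels.
demo = demo_trajectory()
trajs, labels = [], []
for _ in range(200):
    t = perturb(demo, scale=rng.uniform(0.01, 0.3))
    trajs.append(torch.tensor(t, dtype=torch.float32))
    labels.append(float(succeeds(t)))
X = torch.stack(trajs)            # (N, T, 2) perturbed trajectories
y = torch.tensor(labels)          # (N,) success / counterfactual-failure labels

# Per-state mode classifier; trajectory success is predicted from the sequence
# of mode probabilities, so mode grounding is learned without dense labels.
n_modes = 2
mode_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, n_modes))
success_head = nn.Sequential(nn.Linear(n_modes, 16), nn.ReLU(), nn.Linear(16, 1))

opt = torch.optim.Adam(
    list(mode_net.parameters()) + list(success_head.parameters()), lr=1e-3
)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(200):
    mode_probs = torch.softmax(mode_net(X), dim=-1)             # (N, T, n_modes)
    logits = success_head(mode_probs).squeeze(-1).mean(dim=1)   # pool over time
    loss = loss_fn(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# mode_net now grounds raw 2D states into mode families as a by-product and
# could serve as the grounding classifier for a language-level plan.
print("train loss:", float(loss))
```

In this sketch the success head only sees mode probabilities, so any signal useful for separating successes from counterfactual failures has to flow through the per-state mode assignment, which is the mechanism the abstract describes for grounding states without dense labeling.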