Render and Diffuse: Aligning Image and Action Spaces for Diffusion-based Behaviour Cloning
Format: Article
Language: English
Summary: In the field of Robot Learning, the complex mapping between high-dimensional observations such as RGB images and low-level robotic actions, two inherently very different spaces, constitutes a difficult learning problem, especially with limited amounts of data. In this work, we introduce Render and Diffuse (R&D), a method that unifies low-level robot actions and RGB observations within the image space using virtual renders of the 3D model of the robot. Using this joint observation-action representation, it computes low-level robot actions via a learnt diffusion process that iteratively updates the virtual renders of the robot. This space unification simplifies the learning problem and introduces inductive biases that are crucial for sample efficiency and spatial generalisation. We thoroughly evaluate several variants of R&D in simulation and showcase their applicability on six everyday tasks in the real world. Our results show that R&D exhibits strong spatial generalisation capabilities and is more sample efficient than more common image-to-action methods.
DOI: 10.48550/arxiv.2405.18196
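The summary describes an iterative inference procedure: a noisy action candidate is rendered into the camera image using the robot's 3D model, and a learnt diffusion process repeatedly refines the action from this joint observation-action image. The following is a minimal conceptual sketch of such a loop, not the authors' implementation; every function name, shape, and the toy denoising update are hypothetical placeholders standing in for the paper's renderer and learnt model.

```python
import numpy as np


def render_robot(action, image_shape=(64, 64, 3)):
    """Hypothetical renderer: project the robot's 3D model, posed
    according to `action`, into the camera frame. A real system would
    rasterise the robot mesh; here we return a blank image as a stub."""
    return np.zeros(image_shape, dtype=np.float32)


def denoise_step(observation, render, action, t):
    """Hypothetical learnt model: given the RGB observation and the
    current virtual render (the joint observation-action representation),
    predict a refined action. A toy contraction stands in for the
    trained diffusion network."""
    return 0.9 * action


def render_and_diffuse(observation, action_dim=7, num_steps=10, seed=0):
    """Conceptual R&D-style inference loop: start from a noisy action
    and alternate rendering and denoising until the action converges."""
    rng = np.random.default_rng(seed)
    action = rng.standard_normal(action_dim)  # initial noisy action sample
    for t in reversed(range(num_steps)):
        render = render_robot(action)         # align action with image space
        action = denoise_step(observation, render, action, t)
    return action


if __name__ == "__main__":
    obs = np.zeros((64, 64, 3), dtype=np.float32)  # placeholder RGB observation
    print(render_and_diffuse(obs))
```

The point of the sketch is the structure, not the maths: because the action is re-rendered at every step, the denoising model only ever reasons within the image space, which is the space unification the abstract credits for the method's sample efficiency and spatial generalisation.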