Deep SE(3)-Equivariant Geometric Reasoning for Precise Placement Tasks
Format: Article
Language: English
Abstract: Many robot manipulation tasks can be framed as geometric reasoning tasks, where an agent must be able to precisely manipulate an object into a position that satisfies the task from a set of initial conditions. Often, task success is defined based on the relationship between two objects - for instance, hanging a mug on a rack. In such cases, the solution should be equivariant to the initial position of the objects as well as the agent, and invariant to the pose of the camera. This poses a challenge for learning systems that attempt to solve the task by learning directly from high-dimensional demonstrations: the agent must learn to be both equivariant and precise, which can be difficult without any inductive biases about the problem. In this work, we propose a method for precise relative pose prediction which is provably SE(3)-equivariant, can be learned from only a few demonstrations, and can generalize across variations in a class of objects. We accomplish this by factoring the problem into learning an SE(3)-invariant, task-specific representation of the scene and then interpreting this representation with novel geometric reasoning layers which are provably SE(3)-equivariant. We demonstrate that our method yields substantially more precise placement predictions in simulated placement tasks than previous methods trained with the same amount of data, and can accurately represent relative placement relationships in data collected from real-world demonstrations. Supplementary information and videos can be found at https://sites.google.com/view/reldist-iclr-2023.
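
To make the equivariance requirement in the abstract concrete, here is a minimal sketch in NumPy/SciPy. It assumes a hypothetical `predict_placement_pose(action_cloud, anchor_cloud)` interface with a trivially equivariant centroid-alignment placeholder, not the authors' model, and checks that rigidly transforming the whole scene by T conjugates the predicted placement pose by T.

```python
# Minimal sketch of the SE(3)-equivariance property described in the abstract:
# if the whole scene (both point clouds) is moved by a rigid transform T, the
# predicted placement pose should be conjugated by T. The predictor below is a
# hypothetical placeholder, not the authors' implementation.
import numpy as np
from scipy.spatial.transform import Rotation


def random_se3():
    """Return a random 4x4 homogeneous SE(3) transform."""
    T = np.eye(4)
    T[:3, :3] = Rotation.random().as_matrix()
    T[:3, 3] = np.random.uniform(-1.0, 1.0, size=3)
    return T


def apply_se3(T, points):
    """Apply a 4x4 homogeneous transform to an (N, 3) point cloud."""
    return points @ T[:3, :3].T + T[:3, 3]


def predict_placement_pose(action_cloud, anchor_cloud):
    """Hypothetical predictor mapping two point clouds to a 4x4 goal pose.

    Placeholder logic: translate the action cloud's centroid onto the anchor
    cloud's centroid (identity rotation). A learned model would replace this.
    """
    T = np.eye(4)
    T[:3, 3] = anchor_cloud.mean(axis=0) - action_cloud.mean(axis=0)
    return T


# Equivariance check: predicting on the transformed scene should equal
# conjugating the original prediction by the scene transform T.
action = np.random.randn(128, 3)
anchor = np.random.randn(256, 3)
T = random_se3()

pred = predict_placement_pose(action, anchor)
pred_shifted = predict_placement_pose(apply_se3(T, action), apply_se3(T, anchor))

assert np.allclose(pred_shifted, T @ pred @ np.linalg.inv(T), atol=1e-6)
```

A model that is provably SE(3)-equivariant, as claimed in the paper, would satisfy this check by construction rather than by the accident of a hand-designed placeholder.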
DOI: 10.48550/arxiv.2404.13478