Self-Supervised Goal-Conditioned Pick and Place
Format: Article
Language: English
Abstract: Robots can collect large amounts of data autonomously by interacting with objects in the world. However, it is often not obvious \emph{how} to learn from autonomously collected data without human-labeled supervision. In this work we learn pixel-wise object representations from unsupervised pick-and-place data that generalize to new objects. We introduce a novel framework for using these representations to predict where to pick and where to place in order to match a goal image. Finally, we demonstrate the utility of our approach in a simulated grasping environment.
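The goal-conditioned pick-and-place idea in the abstract can be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual architecture: it assumes per-pixel feature maps for the current and goal images are already available, picks at the pixel where the two maps disagree most, and places at the goal-image pixel whose feature best matches the picked object.

```python
import numpy as np

def predict_pick_and_place(curr_feats, goal_feats):
    """Hypothetical sketch of goal-conditioned pick/place prediction.

    curr_feats, goal_feats: (H, W, D) per-pixel feature maps for the
    current observation and the goal image (assumed given, e.g. from a
    learned pixel-wise representation).
    Returns (pick, place) pixel coordinates.
    """
    # Pick: the pixel where the current scene differs most from the goal.
    diff = np.linalg.norm(curr_feats - goal_feats, axis=-1)   # (H, W)
    pick = np.unravel_index(np.argmax(diff), diff.shape)

    # Place: the goal-image pixel whose feature is closest to the
    # feature of the picked object in the current image.
    obj_feat = curr_feats[pick]                               # (D,)
    dist = np.linalg.norm(goal_feats - obj_feat, axis=-1)     # (H, W)
    place = np.unravel_index(np.argmin(dist), dist.shape)
    return pick, place
```

For example, if an object's feature appears at one location in the current image and at a different location in the goal image, the sketch picks at the object's current location and places at its goal location.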
DOI: 10.48550/arxiv.2008.11466