SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional, and Incremental Robot Learning
Format: Article
Language: English
Abstract: Building general-purpose robots that can perform a diverse range of tasks in a large variety of physical-world environments at the human level is extremely challenging. It requires robot learning to be sample-efficient, generalizable, compositional, and incremental. In this work, we introduce a systematic learning framework called the SAGCI-system that targets these four requirements. Our system first takes the raw point clouds gathered by a camera mounted on the robot's wrist as input and produces an initial model of the surrounding environment, represented as a Unified Robot Description Format (URDF) file. The system adopts a learning-augmented differentiable simulation that loads this URDF. The robot then uses interactive perception to interact with the environment and to verify and modify the URDF online. Leveraging the differentiable simulation, we propose a model-based learning algorithm that combines object-centric and robot-centric stages to efficiently produce policies for manipulation tasks. We apply our system to articulated-object manipulation tasks, both in simulation and in the real world. Extensive experiments demonstrate the effectiveness of the proposed learning framework. Supplemental materials and videos are available at https://sites.google.com/view/egci.
DOI: 10.48550/arxiv.2111.14693
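For readers unfamiliar with URDF, the environment model the abstract refers to is an XML file describing links (rigid bodies) and the joints that connect them. Below is a minimal, illustrative sketch of how such a file could be constructed programmatically for a hypothetical one-door cabinet; all element names, dimensions, and joint limits here are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch: build a minimal URDF for a hypothetical articulated
# object (a cabinet with one revolute door joint). All names and numbers
# are made up for demonstration purposes.
import xml.etree.ElementTree as ET

def make_cabinet_urdf() -> str:
    robot = ET.Element("robot", name="cabinet")
    # Two rigid bodies: the cabinet base and the door.
    for link_name in ("base", "door"):
        link = ET.SubElement(robot, "link", name=link_name)
        visual = ET.SubElement(link, "visual")
        geom = ET.SubElement(visual, "geometry")
        ET.SubElement(geom, "box", size="0.4 0.4 0.6")
    # A revolute hinge connecting door to base, rotating about the z-axis.
    joint = ET.SubElement(robot, "joint", name="door_hinge", type="revolute")
    ET.SubElement(joint, "parent", link="base")
    ET.SubElement(joint, "child", link="door")
    ET.SubElement(joint, "axis", xyz="0 0 1")
    ET.SubElement(joint, "limit", lower="0", upper="1.57",
                  effort="10", velocity="1")
    return ET.tostring(robot, encoding="unicode")

urdf = make_cabinet_urdf()
```

A simulator that parses URDF could load the resulting string directly; in the SAGCI pipeline, such a file is first estimated from wrist-camera point clouds and then verified and corrected through interaction.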