Scaling Manipulation Learning with Visual Kinematic Chain Prediction
Main authors: | , , , |
Format: | Article |
Language: | English |
Summary: | Learning general-purpose models from diverse datasets has achieved great
success in machine learning. In robotics, however, existing methods in
multi-task learning are typically constrained to a single robot and workspace,
while recent work such as RT-X requires a non-trivial action normalization
procedure to manually bridge the gap between different action spaces in diverse
environments. In this paper, we propose the visual kinematic chain as a
precise and universal representation of quasi-static actions for robot learning
over diverse environments, which requires no manual adjustment since the visual
kinematic chains can be automatically obtained from the robot's model and
camera parameters. We propose the Visual Kinematics Transformer (VKT), a
convolution-free architecture that supports an arbitrary number of camera
viewpoints, and that is trained with a single objective of forecasting
kinematic structures through optimal point-set matching. We demonstrate the
superior performance of VKT over BC transformers as a general agent on Calvin,
RLBench, Open-X, and real robot manipulation tasks. Video demonstrations can be
found at https://mlzxy.github.io/visual-kinetic-chain. |
DOI: | 10.48550/arxiv.2406.07837 |
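
The summary states that the visual kinematic chain is obtained automatically from the robot's model and camera parameters, and that the network is trained by forecasting kinematic structures through optimal point-set matching. The sketch below illustrates, under stated assumptions, one plausible reading of those two ingredients: projecting 3D joint positions into a camera view with a pinhole model, and scoring predicted against ground-truth 2D keypoints after a Hungarian assignment. It is not the authors' implementation; all function names, shapes, and the choice of Hungarian matching are illustrative assumptions.

```python
"""Minimal sketch (not the paper's code) of two ideas from the abstract:
(1) a "visual kinematic chain" as the projection of the robot's 3D joint
    positions into an image using known camera parameters, and
(2) a point-set matching loss between predicted and ground-truth 2D
    keypoints via an optimal (Hungarian) assignment."""
import numpy as np
from scipy.optimize import linear_sum_assignment


def project_chain(joints_3d: np.ndarray, K: np.ndarray, T_world_to_cam: np.ndarray) -> np.ndarray:
    """Project 3D joint positions (N, 3) to pixel coordinates (N, 2).

    joints_3d: joint positions in the world frame, e.g. from forward kinematics.
    K: 3x3 camera intrinsic matrix.
    T_world_to_cam: 4x4 extrinsic transform from world to camera frame.
    """
    # Homogeneous world points -> camera frame.
    homo = np.hstack([joints_3d, np.ones((joints_3d.shape[0], 1))])  # (N, 4)
    cam = (T_world_to_cam @ homo.T).T[:, :3]                         # (N, 3)
    # Pinhole projection and dehomogenisation.
    pix = (K @ cam.T).T                                              # (N, 3)
    return pix[:, :2] / pix[:, 2:3]


def point_set_matching_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean L2 error after an optimal one-to-one matching of two 2D point sets.

    The abstract only says "optimal point-set matching"; Hungarian matching on
    pairwise distances is one common instantiation, assumed here.
    """
    cost = np.linalg.norm(pred[:, None, :] - target[None, :, :], axis=-1)  # (N, N)
    rows, cols = linear_sum_assignment(cost)
    return float(cost[rows, cols].mean())


if __name__ == "__main__":
    # Toy example: a 7-joint chain observed by a single camera.
    rng = np.random.default_rng(0)
    joints = rng.uniform(-0.5, 0.5, size=(7, 3)) + np.array([0.0, 0.0, 1.5])
    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    T = np.eye(4)  # camera at the world origin, looking along +z
    gt_2d = project_chain(joints, K, T)
    pred_2d = gt_2d + rng.normal(scale=2.0, size=gt_2d.shape)  # noisy "prediction"
    print("matching loss (pixels):", point_set_matching_loss(pred_2d, gt_2d))
```

Because both the projection and the matching operate per camera, the same objective can in principle be applied to an arbitrary number of viewpoints, which is consistent with the multi-view claim in the summary; how VKT aggregates views internally is not specified here.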