AnyGrasp: Robust and Efficient Grasp Perception in Spatial and Temporal Domains
Format: Article
Language: English
Abstract: As the basis for prehensile manipulation, it is vital to enable robots to grasp as robustly as humans. Our innate grasping system is prompt, accurate, flexible, and continuous across spatial and temporal domains. Few existing methods cover all of these properties for robot grasping. In this paper, we propose AnyGrasp for grasp perception, which equips robots with these abilities using a parallel gripper. Specifically, we develop a dense supervision strategy with real perception and analytic labels in the spatial-temporal domain. Additional awareness of objects' center of mass is incorporated into the learning process to help improve grasping stability. Utilization of grasp correspondence across observations enables dynamic grasp tracking. Our model efficiently generates accurate, 7-DoF, dense, and temporally smooth grasp poses, and works robustly against large depth-sensing noise. Using AnyGrasp, we achieve a 93.3% success rate when clearing bins with over 300 unseen objects, on par with human subjects under controlled conditions. Over 900 mean picks per hour is reported on a single-arm system. For dynamic grasping, we demonstrate catching swimming robot fish in the water. Our project page is at https://graspnet.net/anygrasp.html
DOI: 10.48550/arxiv.2212.08333
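
The abstract states that AnyGrasp outputs dense 7-DoF grasp poses for a parallel gripper. As a rough illustration of what a single 7-DoF grasp comprises (3-DoF translation, 3-DoF rotation, and gripper opening width), here is a minimal Python sketch; the `GraspPose` class and its fields are assumptions for illustration only, not AnyGrasp's actual API.

```python
import numpy as np

# Minimal sketch of a 7-DoF parallel-gripper grasp pose:
# 3-DoF translation + 3-DoF rotation + gripper opening width.
# "GraspPose" and its fields are illustrative assumptions,
# not part of AnyGrasp's published interface.
class GraspPose:
    def __init__(self, translation, rotation, width, score=0.0):
        self.translation = np.asarray(translation, dtype=float)  # (3,) gripper center, meters
        self.rotation = np.asarray(rotation, dtype=float)        # (3, 3) rotation matrix
        self.width = float(width)                                # gripper opening, meters
        self.score = float(score)                                 # predicted grasp quality

    def to_matrix(self):
        """Return the 4x4 homogeneous transform of the gripper frame in the camera frame."""
        T = np.eye(4)
        T[:3, :3] = self.rotation
        T[:3, 3] = self.translation
        return T

# Example: a grasp 0.3 m in front of the camera, aligned with the camera axes,
# with an 8 cm gripper opening.
grasp = GraspPose(translation=[0.0, 0.0, 0.3], rotation=np.eye(3), width=0.08, score=0.9)
print(grasp.to_matrix())
```

A grasp detector in this style typically returns many such poses per frame, ranked by score; the homogeneous transform is what a motion planner would consume when moving the gripper to the grasp.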