Real-Time Robotic Multigrasp Detection Using Anchor-Free Fully Convolutional Grasp Detector

Bibliographic Details
Published in: IEEE Transactions on Industrial Electronics (1982), 2022-12, Vol. 69 (12), pp. 13171-13181
Main Authors: Wu, Yongxiang; Zhang, Fuhai; Fu, Yili
Format: Article
Language: English
Description

Summary: Robotic grasping is essential for intelligent manufacturing. This article presents a novel anchor-free grasp detector based on a fully convolutional network for detecting multiple valid grasps from RGB-D images in real time. Grasp detection is formulated as a closest horizontal or vertical rectangle regression task and a grasp angle classification task. By directly predicting grasps at feature points, our method eliminates the predefined anchors commonly used in prior methods, thus avoiding anchor-related hyperparameters and complex computations. To suppress ambiguous and low-quality training samples, a new sample assignment strategy that combines center sampling and regression weights is proposed. Our method achieves a state-of-the-art accuracy of 99.4% on the Cornell dataset and 96.2% on the Jacquard dataset, with a real-time speed of 104 frames per second and approximately 2× fewer parameters and 8× less training time than the previous one-stage detector. Moreover, an efficient multiscale feature fusion module is integrated, improving multigrasp detection performance by 25%. In real-world robotic grasping of novel objects, our method achieves a grasp success rate of 91.3% for single objects and 83.3% for multiple objects, with only 26 ms needed for the whole grasp planning. The results demonstrate that our method is robust enough for potential industrial applications.
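The formulation summarized above (direct per-location rectangle regression plus angle classification, with no predefined anchors) can be illustrated with a short sketch. The following is a minimal PyTorch example assuming a fused feature map as input; module names, channel sizes, and the number of angle bins are illustrative assumptions, not the authors' implementation.

# Minimal sketch (PyTorch) of an anchor-free grasp head in the spirit of
# the abstract; all names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class AnchorFreeGraspHead(nn.Module):
    def __init__(self, in_channels: int = 256, num_angle_bins: int = 18):
        super().__init__()
        # Shared 3x3 conv trunk over the fused feature map.
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Per-location distances to the four sides (left, top, right,
        # bottom) of the closest axis-aligned grasp rectangle.
        self.box_reg = nn.Conv2d(in_channels, 4, 3, padding=1)
        # Per-location grasp-angle classification logits.
        self.angle_cls = nn.Conv2d(in_channels, num_angle_bins, 3, padding=1)
        # Per-location graspness score for ranking candidate grasps.
        self.score = nn.Conv2d(in_channels, 1, 3, padding=1)

    def forward(self, feats: torch.Tensor):
        x = self.trunk(feats)
        boxes = self.box_reg(x).exp()      # exp() keeps distances positive
        angles = self.angle_cls(x)
        scores = self.score(x).sigmoid()
        return boxes, angles, scores

# Usage: every location of a 40x40 feature map yields one grasp hypothesis
# in a single forward pass, with no anchor boxes to tune or match.
feats = torch.randn(1, 256, 40, 40)
boxes, angles, scores = AnchorFreeGraspHead()(feats)

Because every feature point emits its own prediction, candidate grasps are obtained densely in one pass; this is what removes the anchor-related hyperparameters and matching computations mentioned in the abstract.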
ISSN: 0278-0046, 1557-9948
DOI: 10.1109/TIE.2021.3135629