Interactive Prompt‐Guided Robotic Grasping for Arbitrary Objects Based on Promptable Segment Anything Model and Force‐Closure Analysis



Bibliographic Details
Published in: Advanced Intelligent Systems, 2024-09
Authors: Liu, Yan; Liu, Yaxin; Han, Ruiqing; Zheng, Kai; Yao, Yufeng; Zhong, Ming
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Grasp generation methods based on force‐closure analysis can calculate optimal grasps for objects from their appearance. However, robots' limited visual perception makes it difficult to directly detect the complete appearance of an object, and building predefined models is a costly procedure. These constraints limit the application of force‐closure analysis in the real world. To address this, the article proposes an interactive robotic grasping method based on the promptable Segment Anything Model and force‐closure analysis. A human operator marks a prompt on any object using a laser pointer; the robot then extracts the edge of the marked object and calculates the optimal grasp from that edge. To validate feasibility and generalizability, the grasp generation method is tested on the Cornell and Jacquard datasets, and a novel benchmark test set of 36 diverse objects is constructed for real‐world experiments. Furthermore, the contribution of each step is demonstrated through ablation experiments, and the proposed method is tested in occlusion scenarios. Project code and data are available at https://github.com/TonyYounger‐Eg/Anything_Grasping.
ISSN: 2640-4567
DOI: 10.1002/aisy.202400404