Combining RGB and Points to Predict Grasping Region for Robotic Bin-Picking
Format: Article
Language: English
Abstract: This paper focuses on robotic picking tasks in cluttered scenarios. Because of the diversity of objects and the clutter created by their placement, it is difficult to recognize objects and estimate their poses before grasping. Here, we use U-Net, a special convolutional neural network (CNN), to combine RGB images and depth information to predict the picking region without recognition or pose estimation. We compared the efficiency of diverse visual inputs to the network, including RGB, RGB-D, and RGB-Points, and found that the RGB-Points input achieved a precision of 95.74%.
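The "RGB-Points" input described above can be read as the color image stacked with a per-pixel XYZ map back-projected from depth. A minimal sketch of how such a 6-channel input tensor might be assembled, assuming pinhole camera intrinsics (the function names and intrinsic values here are illustrative, not from the paper):

```python
import numpy as np

def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth map (H, W) into a per-pixel XYZ map (H, W, 3).

    Assumed pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def make_rgb_points_input(rgb, depth):
    """Concatenate normalized RGB (H, W, 3) with XYZ into a 6-channel tensor."""
    points = depth_to_points(depth)
    return np.concatenate([rgb.astype(np.float32) / 255.0, points], axis=-1)

# Example: a 480x640 frame yields a (480, 640, 6) input for the network.
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
depth = np.ones((480, 640), dtype=np.float32)
x = make_rgb_points_input(rgb, depth)
print(x.shape)  # (480, 640, 6)
```

A U-Net would then consume this 6-channel tensor in place of a plain 3-channel RGB image, with only the first convolution's input channel count changed.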
DOI: 10.48550/arxiv.1904.07394