Teaching Robots to Do Object Assembly using Multi-modal 3D Vision
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: The motivation of this paper is to develop a smart system using multi-modal vision for next-generation mechanical assembly. The system includes two phases: in the first phase, human beings teach the assembly structure to a robot; in the second phase, the robot finds, grasps, and assembles the objects using AI planning. The crucial part of the system is the precision of 3D visual detection, and the paper presents multi-modal approaches to meet the requirements: AR markers are used in the teaching phase, since human beings can actively control the process, while point cloud matching and geometric constraints are used in the robot execution phase to avoid unexpected noise. Experiments are performed to examine the precision and correctness of the approaches. The study is practical: the developed approaches are integrated with graph model-based motion planning, implemented on an industrial robot, and applicable to real-world scenarios.
DOI: 10.48550/arxiv.1601.06473
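
The two detection paths described in the summary can be sketched in code. Below is a minimal, hypothetical illustration: ArUco-style marker pose estimation for the teaching phase and ICP-based point cloud matching for the execution phase. The paper does not state which libraries it uses; OpenCV and Open3D, and all function names, marker sizes, and thresholds here (`estimate_marker_pose`, `match_model_to_scene`, `marker_len`, etc.) are assumptions for illustration only.

```python
# Hypothetical sketch of the two detection phases described in the abstract.
# Assumes OpenCV (with the aruco module, >= 4.7) and Open3D; neither library
# is named in the paper -- this is illustrative only.
import cv2
import numpy as np
import open3d as o3d

# --- Teaching phase: AR-marker pose estimation --------------------------
def estimate_marker_pose(image, camera_matrix, dist_coeffs, marker_len=0.04):
    """Detect one AR marker and return its 4x4 pose in the camera frame."""
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(aruco_dict,
                                       cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(image)
    if ids is None:
        return None
    # Marker corners in the marker's own frame (z = 0 plane), in the same
    # order ArUco reports them: top-left, top-right, bottom-right, bottom-left.
    half = marker_len / 2.0
    obj_pts = np.array([[-half,  half, 0], [ half,  half, 0],
                        [ half, -half, 0], [-half, -half, 0]],
                       dtype=np.float32)
    img_pts = corners[0].reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)   # rotation vector -> matrix
    T[:3, 3] = tvec.ravel()
    return T

# --- Execution phase: point cloud matching ------------------------------
def match_model_to_scene(model, scene, init=np.eye(4), threshold=0.005):
    """Refine a model-to-scene pose estimate with point-to-plane ICP."""
    scene.estimate_normals()  # point-to-plane ICP needs target normals
    result = o3d.pipelines.registration.registration_icp(
        model, scene, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation, result.fitness
```

In a pipeline like the one the abstract outlines, the ICP initial guess would come from a coarse global match, and the geometric constraints mentioned in the summary (e.g., objects resting on a known work surface) would prune implausible pose hypotheses before refinement.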