Neural network-based robot visual positioning for intelligent assembly

Bibliographic Details
Published in: Journal of Intelligent Manufacturing, 2004-04, Vol. 15 (2), p. 219-231
Main Authors: Ramachandram, Dhanesh; Rajeswari, Mandava
Format: Article
Language: English
Online Access: Full text
Description
Summary: A fundamental task in robotic assembly is the pick-and-place operation. Generally, this operation consists of three subtasks: guiding the robot to the target and positioning the manipulator in an appropriate pose, picking up the object, and moving the object to a new location. In situations where the pose of the target may vary in the workspace, sensory feedback becomes indispensable for guiding the robot to the object. Ideally, local image features should be clearly visible and unoccluded in multiple views of the object; in reality, this is not always the case. We present a visual positioning system that addresses feature extraction issues for a class of objects that have smooth or curved surfaces. In this work, the visual sensor consists of an arm-mounted camera and a grid pattern projector that produces images with a local surface description of the target. The projected pattern is always visible in the image and is sensitive to variations in the object's pose. A set of low-order geometric moments globally characterizes the observed pattern, eliminating the need for feature localization and overcoming the point correspondence problem. A neural network then learns the complex relationship between the robot's pose displacements and the observed variations in the image features. After training, visual feedback guides the robot to the target from any arbitrary location in the workspace.
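
To make the described pipeline concrete, below is a minimal sketch of the two stages the abstract outlines: low-order geometric moments computed over an image of the projected grid pattern as a global feature vector, and a small feed-forward network regressing pose displacements from those features. The grayscale input, moment order, network size, scikit-learn regressor, and synthetic data are illustrative assumptions, not the configuration reported in the paper.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def geometric_moments(image, max_order=3):
    """Raw geometric moments m_pq = sum_x sum_y x**p * y**q * I(x, y)
    for all p + q <= max_order (10 features for max_order = 3)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    feats = []
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            feats.append(float(np.sum(xs ** p * ys ** q * image)))
    return np.array(feats)

# Hypothetical training data: images of the projected grid paired with the
# 6-DOF pose displacement (dx, dy, dz, roll, pitch, yaw) that produced them.
rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))            # placeholder pattern images
displacements = rng.uniform(-1, 1, (200, 6))  # placeholder pose labels

X = np.stack([geometric_moments(img) for img in images])

# Standardizing matters here: raw moments span many orders of magnitude.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)
model.fit(X, displacements)

# At run time, the predicted displacement serves as the visual-feedback
# correction that steps the arm toward the target pose.
step = model.predict(geometric_moments(images[0])[None, :])
print(step.shape)  # (1, 6)

In the closed-loop setting the abstract describes, this prediction step would be iterated: re-image the pattern, recompute the moments, predict, and move, until the estimated displacement falls below a threshold.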
ISSN: 0956-5515, 1572-8145
DOI: 10.1023/B:JIMS.0000018034.76366.b8