Robust Task-Oriented Markerless Extrinsic Calibration for Robotic Pick-and-Place Scenarios
Published in: IEEE Access, 2019, Vol. 7, p. 127932-127942
Main authors: , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Camera extrinsic calibration is an important module for robotic visual tasks. A typical visual task uses a robot and a color camera to pick an object from a variety of items and place it in a designated area. However, noise from multi-sensor processing can significantly affect the results when running a full-process visual task; in addition, checkerboards are inconvenient or unavailable in pick-and-place scenarios. In this paper, we propose and develop a task-oriented markerless hand-eye calibration method based on nonlinear iterative optimization. The optimization uses a transfer error to construct the cost function, a quantity that is observable and estimable in visual tasks. Our method requires no calibration checkerboard and instead uses a salient object already present in the task scene as the marker. It is an end-to-end method that treats the extrinsic parameters as variables optimized against the cost function, making it both robust to sensor noise and able to meet the reconstruction-accuracy requirements of the tasks. Unlike classic methods that detect a calibration pattern of known size, the input to our method is a batch of image points and the corresponding world points. The results show that the accuracy of our extrinsic calibration method is sufficient for the robot's pick-and-place tasks, and competition experiments demonstrate that the method is effective in vision-in-the-loop automatic pick-and-place scenarios. (A code sketch of the described optimization follows the record below.)
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2913421
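
The abstract describes estimating the camera extrinsics by nonlinear iterative optimization of a transfer (reprojection) error over a batch of image points and their corresponding world points, with no checkerboard. The following Python sketch illustrates that idea under stated assumptions: known pinhole intrinsics `K`, world points expressed in the robot base frame, and matched pixel detections of a salient scene object. The names (`calibrate_extrinsics`, `transfer_residuals`) and the Huber robust loss are illustrative choices, not the paper's actual implementation.

```python
# Minimal sketch of extrinsic calibration by minimizing a transfer
# (reprojection) error -- assumptions: zero-skew pinhole intrinsics K,
# 3D points in the robot base frame, matched 2D image detections.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(params, world_pts, K):
    """Project (N, 3) world points into the image given a 6-DoF extrinsic.

    params: [rx, ry, rz, tx, ty, tz] -- rotation vector + translation
    mapping base-frame coordinates to camera coordinates.
    """
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    cam = world_pts @ R.T + t               # points in the camera frame
    uv = cam[:, :2] / cam[:, 2:3]           # perspective division
    return uv @ K[:2, :2].T + K[:2, 2]      # focal lengths + principal point


def transfer_residuals(params, world_pts, image_pts, K):
    """Transfer error: predicted minus observed pixel coordinates."""
    return (project(params, world_pts, K) - image_pts).ravel()


def calibrate_extrinsics(world_pts, image_pts, K, x0=None):
    """Estimate (R, t) by nonlinear least squares on the transfer error.

    The Huber loss is an assumption added here to tolerate noisy
    correspondences; the initial guess places the scene ~1 m in front
    of the camera, typical for a tabletop pick-and-place cell.
    """
    if x0 is None:
        x0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0])
    result = least_squares(transfer_residuals, x0,
                           args=(world_pts, image_pts, K),
                           loss='huber', f_scale=2.0)
    R = Rotation.from_rotvec(result.x[:3]).as_matrix()
    return R, result.x[3:], result
```

Parameterizing the rotation as a 3-vector (axis-angle) keeps the problem unconstrained for the solver, and the robust loss hedges against outlier correspondences, in line with the paper's emphasis on robustness to sensor noise. A sensible initial guess matters in practice, since points behind the camera at the starting pose would make the projection ill-defined.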