Fusion of laser and visual data for robot motion planning and collision avoidance


Bibliographic Details
Published in: Machine Vision and Applications 2003-12, Vol. 15 (2), pp. 92–100
Main Authors: Baltzakis, Haris; Argyros, Antonis; Trahanias, Panos
Format: Article
Language: English
Online access: Full text
Description
Abstract: In this paper, a method for inferring scene structure information from both laser and visual data is proposed. Common laser scanners employed in contemporary robotic systems provide accurate range measurements, but only in 2D slices of the environment. Vision, on the other hand, is capable of providing dense 3D information about the environment. The proposed fusion scheme combines the accuracy of laser sensors with the broad visual fields of cameras to extract accurate scene structure information. Data fusion is achieved by validating 3D structure assumptions, formed according to 2D range scans of the environment, through the exploitation of visual information. The proposed methodology is applied to robot motion planning and collision avoidance tasks using a suitably modified version of the vector field histogram algorithm. Experimental results confirm the effectiveness of the proposed methodology.
ISSN:0932-8092
1432-1769
DOI:10.1007/s00138-003-0133-2
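The vector field histogram (VFH) algorithm mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' modified version; it is a generic VFH-style routine under assumed parameters (sector count, maximum range, density threshold), building a polar obstacle-density histogram from 2D range readings and steering toward the admissible sector closest to the target direction:

```python
import numpy as np

def vfh_steering(angles, ranges, target_angle, num_sectors=36,
                 max_range=5.0, threshold=0.5):
    """Minimal VFH-style steering sketch (illustrative, not the paper's method).

    angles, ranges : 2D laser scan as bearing (rad, in [-pi, pi]) and distance.
    target_angle   : desired heading toward the goal (rad).
    Returns a steering direction (rad), or None if every sector is blocked.
    """
    sector_width = 2 * np.pi / num_sectors
    density = np.zeros(num_sectors)
    # Closer obstacles contribute higher density to their sector.
    for a, r in zip(angles, ranges):
        if r < max_range:
            s = int(((a + np.pi) % (2 * np.pi)) / sector_width) % num_sectors
            density[s] += (max_range - r) / max_range
    # Candidate ("free") sectors have density below the threshold.
    centers = -np.pi + (np.arange(num_sectors) + 0.5) * sector_width
    free = density < threshold
    if not free.any():
        return None
    # Pick the free sector whose center is angularly closest to the target.
    diffs = np.abs(np.angle(np.exp(1j * (centers - target_angle))))
    diffs[~free] = np.inf
    return float(centers[int(np.argmin(diffs))])
```

In an obstacle-free scan the returned direction lies in the sector nearest the goal heading; a dense cluster of short range readings around the goal direction raises that sector's density above the threshold, forcing the steering choice into an adjacent free sector.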