Robust Vision-Based Relative-Localization Approach Using an RGB-Depth Camera and LiDAR Sensor Fusion

Bibliographic Details
Published in: IEEE Transactions on Industrial Electronics, 2016-06, Vol. 63 (6), p. 3725-3736
Main Authors: Song, Haryong; Choi, Wonsub; Kim, Haedong
Format: Article
Language: English
Description
Abstract: This paper describes a robust vision-based relative-localization approach for a moving target based on an RGB-depth (RGB-D) camera and sensor measurements from two-dimensional (2-D) light detection and ranging (LiDAR). With the proposed approach, a target's three-dimensional (3-D) and 2-D position information is measured with an RGB-D camera and LiDAR sensor, respectively, to find the location of a target by incorporating visual-tracking algorithms, depth information of the structured light sensor, and a low-level vision-LiDAR fusion algorithm, e.g., extrinsic calibration. To produce 2-D location measurements, both visual- and depth-tracking approaches are introduced, utilizing an adaptive color-based particle filter (ACPF) (for visual tracking) and an interacting multiple-model (IMM) estimator with intermittent observations from depth-image segmentation (for depth image tracking). The 2-D LiDAR data enhance location measurements by replacing results from both visual and depth tracking; through this procedure, multiple LiDAR location measurements for a target are generated. To deal with these multiple-location measurements, we propose a modified track-to-track fusion scheme. The proposed approach shows robust localization results, even when one of the trackers fails. The proposed approach was compared to position data from a Vicon motion-capture system as the ground truth. The results of this evaluation demonstrate the superiority and robustness of the proposed approach.
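
To illustrate the kind of combination step the abstract refers to, the sketch below fuses two 2-D position tracks (for example, one from visual tracking and one derived from LiDAR) by simple information-weighted averaging under an independent-error assumption. The function name, covariance values, and independence assumption are illustrative only; the paper's modified track-to-track fusion scheme handles multiple LiDAR-generated measurements and tracker failures, and is not reproduced here.

    import numpy as np

    def fuse_tracks(x1, P1, x2, P2):
        """Covariance-weighted fusion of two 2-D position tracks.

        x1, x2 : (2,) position estimates from two independent trackers
        P1, P2 : (2, 2) error covariances of those estimates
        Returns the fused position estimate and its covariance.
        """
        P1_inv = np.linalg.inv(P1)
        P2_inv = np.linalg.inv(P2)
        # Information (inverse-covariance) weighting: more certain tracks get more weight.
        P_fused = np.linalg.inv(P1_inv + P2_inv)
        x_fused = P_fused @ (P1_inv @ x1 + P2_inv @ x2)
        return x_fused, P_fused

    # Hypothetical numbers: a visual-tracking estimate and a LiDAR-derived estimate
    # of the same target, with the LiDAR track assumed to be more accurate.
    x_visual, P_visual = np.array([1.20, 0.85]), np.diag([0.04, 0.04])
    x_lidar,  P_lidar  = np.array([1.15, 0.90]), np.diag([0.01, 0.01])
    x_f, P_f = fuse_tracks(x_visual, P_visual, x_lidar, P_lidar)
    print("fused position:", x_f)

In this simplified form the fused estimate is pulled toward the track with the smaller covariance; accounting for cross-correlation between tracks, as full track-to-track fusion does, requires the cross-covariance terms omitted here.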
ISSN: 0278-0046, 1557-9948
DOI: 10.1109/TIE.2016.2521346