New Monte Carlo Localization Using Deep Initialization: A Three-Dimensional LiDAR and a Camera Fusion Approach

Bibliographic Details
Published in: IEEE Access, 2020, Vol. 8, pp. 74485-74496
Main Authors: Jo, Hyunggi; Kim, Euntai
Format: Article
Language: English
Online Access: Full Text
Description
Summary: Fast and accurate global localization of autonomous ground vehicles is often required in indoor environments and GPS-shaded areas. Typically, in the global localization problem, the entire environment must be observed for a long time before the pose estimate converges. To overcome this limitation, a new initialization method called deep initialization is proposed and applied to Monte Carlo localization (MCL). The proposed method is based on the combination of a three-dimensional (3D) light detection and ranging (LiDAR) sensor and a camera. Using the camera, pose regression based on a deep convolutional neural network (CNN) is performed to initialize the particles of MCL. The particles are sampled from the tangent space to the manifold structure of the group of rigid motions, SE(3). Using the 3D LiDAR as the range sensor, a particle filter is applied to estimate the sensor pose. Furthermore, a re-localization method is proposed that performs initialization whenever a localization failure or a robot-kidnapping situation is detected; both cases are detected by combining the outputs of the camera and the 3D LiDAR. Finally, the proposed method is applied to a mobile robot platform to demonstrate its effectiveness in terms of both localization accuracy and the time required to estimate the pose correctly.
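The abstract describes the deep-initialization step only at a high level. The sketch below is an informal illustration, not the authors' code: it shows one plausible way to draw MCL particles in the tangent space of the rigid-motion group SE(3) around a CNN-regressed pose and to map them back onto the manifold with the exponential map. The 4x4 homogeneous pose representation, the function names (hat, exp_se3, deep_initialization), the zero-mean Gaussian perturbation, and the noise scales sigma_trans and sigma_rot are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of "deep initialization" for MCL, assuming 4x4 homogeneous
# SE(3) poses and Gaussian tangent-space noise (both are assumptions).
import numpy as np

def hat(w):
    """Skew-symmetric (so(3) hat) matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_se3(xi):
    """Exponential map of a 6-vector twist xi = (rho, phi) to a 4x4 SE(3) matrix."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    Phi = hat(phi)
    if theta < 1e-10:
        # First-order approximation near the identity.
        R = np.eye(3) + Phi
        V = np.eye(3)
    else:
        # Rodrigues formula and the corresponding translation Jacobian.
        R = (np.eye(3) + np.sin(theta) / theta * Phi
             + (1 - np.cos(theta)) / theta**2 * Phi @ Phi)
        V = (np.eye(3) + (1 - np.cos(theta)) / theta**2 * Phi
             + (theta - np.sin(theta)) / theta**3 * Phi @ Phi)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ rho
    return T

def deep_initialization(T_cnn, n_particles=500,
                        sigma_trans=0.5, sigma_rot=0.1, rng=None):
    """Sample MCL particles around the CNN-regressed pose T_cnn (4x4 matrix).

    Twists are drawn from a zero-mean Gaussian in the tangent space and
    applied on the right: T_i = T_cnn @ exp(xi_i). Weights start uniform.
    """
    rng = rng or np.random.default_rng()
    sigmas = np.array([sigma_trans] * 3 + [sigma_rot] * 3)
    particles = []
    for _ in range(n_particles):
        xi = rng.normal(0.0, sigmas)           # perturbation in the tangent space
        particles.append(T_cnn @ exp_se3(xi))  # map back onto the SE(3) manifold
    weights = np.full(n_particles, 1.0 / n_particles)
    return particles, weights

# Example usage (hypothetical values):
# T_cnn = np.eye(4)  # pose regressed by the camera CNN
# particles, weights = deep_initialization(T_cnn, n_particles=1000)
```

In this sketch the perturbation is applied on the right of the CNN pose and the initial weights are uniform; whether the paper perturbs on the left or right, and how it shapes the initial distribution from the CNN output, is not specified in the abstract.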
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.2988464