Accurate Monocular Visual-Inertial SLAM Using a Map-Assisted EKF Approach

Bibliographic Details
Published in: IEEE Access, 2019, Vol. 7, pp. 34289-34300
Main Authors: Quan, Meixiang; Piao, Songhao; Tan, Minglang; Huang, Shi-Sheng
Format: Article
Language: English
Online Access: Full text
Description
Summary: This paper presents a novel tightly coupled monocular visual-inertial simultaneous localization and mapping (SLAM) algorithm that provides accurate and robust motion tracking at high frame rates on a standard CPU. To ensure a fast response to the highly dynamic motion of robots, we use a visual-inertial extended Kalman filter (EKF) to track the motion. Although the filter can become inconsistent due to linearization errors, it is well known that EKF-based visual-inertial odometry (VIO) provides drift-free motion estimates with respect to the landmarks maintained in the EKF state vector. We therefore construct a globally consistent map and feed it back to the EKF state vector: in a parallel thread, we build a global map and perform keyframe-based visual-inertial bundle adjustment to optimize it, while a loop-closure detection and correction module, also running in parallel, eliminates the accumulated drift when an area is revisited. The constructed global map is occasionally fed back to the EKF VIO module to update and augment the EKF state vector, so that drift-corrected landmarks anchor subsequent filter updates and improve the motion-tracking accuracy of the EKF VIO estimator. The system achieves motion-tracking accuracy comparable to that of optimization-based methods, with per-frame processing time close to that of filter-based methods. The superiority of the proposed algorithm is validated in experiments.
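To make the map-feedback idea concrete, the sketch below shows a generic EKF measurement update and a hypothetical state-augmentation step in which globally optimized landmarks are appended to the filter state. The function names, the independent landmark prior sigma_map, and the zero cross-covariance between old state and new landmarks are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def ekf_update(x, P, z, z_pred, H, R):
        # Standard EKF measurement update: x is the state mean, P its
        # covariance, z the measurement, z_pred = h(x) the predicted
        # measurement, H the measurement Jacobian, R the noise covariance.
        y = z - z_pred                        # innovation
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new

    def feed_back_map(x, P, landmarks, sigma_map=0.01):
        # Hypothetical feedback step: append each globally optimized 3-D
        # landmark position to the EKF state with a small independent
        # prior covariance, ignoring cross-correlations for simplicity.
        for p in landmarks:
            x = np.concatenate([x, np.asarray(p, dtype=float)])
            n = P.shape[0]
            P_aug = np.zeros((n + 3, n + 3))
            P_aug[:n, :n] = P
            P_aug[n:, n:] = sigma_map ** 2 * np.eye(3)
            P = P_aug
        return x, P

Because the fed-back landmarks come from the loop-closed, bundle-adjusted map, subsequent measurement updates against them pull the filter estimate toward a drift-corrected frame of reference, which is the mechanism the summary describes.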
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2904512