Tightly Coupled Integration of GNSS and Vision SLAM Using 10-DoF Optimization on Manifold
Published in: IEEE Sensors Journal, 2019-12, Vol. 19 (24), pp. 12105-12117
Main authors: , , , ,
Format: Article
Language: English
Online access: Order full text
Abstract: Vision navigation techniques, especially vision-based simultaneous localization and mapping (V-SLAM), play a critical role in robotic navigation. As a relative positioning technique, V-SLAM often suffers from drift and scale uncertainty, which cause its bias to grow over time. To overcome these drawbacks and improve the robustness and accuracy of localization, an effective approach is to fuse the global navigation satellite system (GNSS) with V-SLAM. In this paper, we propose a novel GNSS and SLAM fusion algorithm that estimates ego-motion by tightly coupling GNSS pseudo-range measurements with camera feature points. It first decomposes the pose state into basic motion vectors, based on which asynchronous tracking is performed. Then, a 10-DoF joint-optimization formulation on manifold is proposed to achieve tight fusion of the raw measurements from the camera and GNSS. Finally, this formulation is solved to compute the ego-motion state. The proposed algorithm is verified on an autonomous ground vehicle in two typical environments. The results demonstrate that the new algorithm can correct the bias in vision SLAM and constrain the GNSS solution, achieving better localization than traditional methods.
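To make the abstract's formulation concrete, below is a minimal sketch of a tightly coupled state and its two residual types. The 10-DoF layout chosen here (3-DoF rotation, 3-DoF position, 3-DoF velocity, receiver clock bias) and all names such as `State10DoF`, `retract`, and `pseudorange_residual` are illustrative assumptions, not the paper's exact parameterization; the on-manifold update composes the rotation block through the SO(3) exponential map, and the GNSS factor uses the standard pseudo-range model (geometric range plus clock bias expressed in meters).

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues' formula: map an axis-angle 3-vector to a rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

class State10DoF:
    """Hypothetical 10-DoF state: rotation (3), position (3),
    velocity (3), and receiver clock bias in meters (1)."""
    def __init__(self, R, p, v, clk):
        self.R, self.p, self.v, self.clk = R, p, v, clk

    def retract(self, delta):
        """On-manifold update x [+] delta: the rotation block is composed
        through the SO(3) exponential map, the Euclidean blocks additively."""
        return State10DoF(self.R @ so3_exp(delta[0:3]),
                          self.p + delta[3:6],
                          self.v + delta[6:9],
                          self.clk + delta[9])

def pseudorange_residual(state, sat_pos, rho_meas):
    """Tightly coupled GNSS factor: measured pseudo-range minus the
    geometric range to the satellite plus the receiver clock bias."""
    return rho_meas - (np.linalg.norm(sat_pos - state.p) + state.clk)

def reprojection_residual(state, landmark, pix, fx, fy, cx, cy):
    """Vision factor: pinhole reprojection error of a world landmark
    against its tracked feature-point observation."""
    pc = state.R.T @ (landmark - state.p)  # world frame -> camera frame
    u = fx * pc[0] / pc[2] + cx
    v = fy * pc[1] / pc[2] + cy
    return pix - np.array([u, v])

# Evaluate one GNSS factor at an identity initial state.
x = State10DoF(np.eye(3), np.zeros(3), np.zeros(3), 0.0)
print(pseudorange_residual(x, np.array([15600e3, 7540e3, 20140e3]), 2.66e7))
```

In a full pipeline, both residual types would be stacked into one joint nonlinear least-squares cost and minimized iteratively, with every increment applied through the retraction so the rotation block stays on its manifold.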
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2019.2935387