Long-Term Visual Simultaneous Localization and Mapping: Using a Bayesian Persistence Filter-Based Global Map Prediction



Bibliographic Details
Published in: IEEE Robotics & Automation Magazine, March 2023, Vol. 30, No. 1, pp. 2-15
Authors: Deng, Tianchen; Xie, Hongle; Wang, Jingchuan; Chen, Weidong
Format: Article
Language: English
Abstract: With the rapidly growing demand for accurate localization in real-world environments, visual simultaneous localization and mapping (SLAM) has received significant attention in recent years. However, existing methods still suffer from degraded localization accuracy in long-term changing environments. To address this problem, we propose a novel long-term SLAM system with map prediction and dynamics removal. First, a visual point-cloud matching algorithm is designed to efficiently fuse 2D pixel information and 3D voxel information. Second, each map point is classified as static, semistatic, or dynamic based on a Bayesian persistence filter (BPF). The dynamic map points are then removed to eliminate their influence on localization. A predicted global map is obtained by modeling the time series of the semistatic map points. Finally, we incorporate the predicted global map into a state-of-the-art SLAM method, yielding an efficient visual SLAM system for long-term, dynamic environments. Extensive experiments are carried out on a wheelchair robot in an indoor environment over several months. The results demonstrate that our method achieves better map prediction accuracy and more robust localization performance.
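The abstract names the Bayesian persistence filter as the mechanism for classifying map points but gives no equations. The sketch below is a minimal batch persistence filter in the spirit of Rosen et al.'s formulation, assuming an exponential survival prior on a point's vanishing time and fixed missed-detection and false-alarm rates. The class name, its parameters (lam, p_miss, p_false), and the static/semistatic/dynamic thresholds in the usage lines are illustrative assumptions, not details taken from the article.

import math

class PersistenceFilter:
    """Minimal batch Bayesian persistence filter (sketch).

    Models a map point's unknown vanishing time T with an exponential
    survival prior P(T >= t) = exp(-lam * t), and fuses noisy binary
    re-detections to estimate the probability that the point still
    exists at a query time.
    """

    def __init__(self, lam=0.01, p_miss=0.2, p_false=0.05):
        self.lam = lam          # rate of the exponential survival prior (assumed)
        self.p_miss = p_miss    # P(not detected | point still present)
        self.p_false = p_false  # P(detected | point already vanished)
        self.times = []         # observation timestamps, increasing
        self.obs = []           # binary detections: 1 = seen, 0 = missed

    def add_observation(self, t, detected):
        self.times.append(float(t))
        self.obs.append(1 if detected else 0)

    def _survival(self, t):
        return math.exp(-self.lam * t)

    def _lik_present(self, y):
        return 1.0 - self.p_miss if y else self.p_miss

    def _lik_absent(self, y):
        return self.p_false if y else 1.0 - self.p_false

    def persistence_probability(self, t_query):
        """P(T >= t_query | observations), for t_query at or after the last timestamp."""
        n = len(self.obs)
        bounds = self.times + [None]   # None marks +infinity
        evidence = 0.0
        prefix = 1.0                   # present-likelihood of obs[:j]
        lo = 0.0
        for j in range(n + 1):
            # Hypothesis: the point vanished in [lo, bounds[j]); observations
            # before index j use the present model, the rest the absent model.
            tail = 1.0
            for y in self.obs[j:]:
                tail *= self._lik_absent(y)
            hi = bounds[j]
            prior_mass = self._survival(lo) - (self._survival(hi) if hi is not None else 0.0)
            evidence += prefix * tail * prior_mass
            if j < n:
                prefix *= self._lik_present(self.obs[j])
                lo = self.times[j]
        # prefix now holds the likelihood of all observations given that the
        # point persisted through every observation time.
        return prefix * self._survival(t_query) / evidence

# Illustrative use; the class thresholds below are assumptions, not the paper's.
pf = PersistenceFilter(lam=0.01, p_miss=0.2, p_false=0.05)
for t, seen in [(0.0, True), (30.0, True), (60.0, False), (90.0, True)]:
    pf.add_observation(t, seen)
p = pf.persistence_probability(t_query=120.0)
label = "static" if p > 0.9 else ("dynamic" if p < 0.1 else "semistatic")
print(f"P(point persists at t=120) = {p:.3f} -> {label}")

The batch form above recomputes the evidence from all observations at query time; a long-running SLAM system would more likely maintain a recursive update per map point so that old observations need not be revisited.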
ISSN: 1070-9932
EISSN: 1558-223X
DOI: 10.1109/MRA.2022.3228492