GDO-SLAM: Visual-Based Ground-Aware Decoupling Optimized SLAM for UGV in Outdoor Environments

Bibliographic Details
Published in: IEEE Sensors Journal, 2024-11, Vol. 24 (22), pp. 37218-37228
Authors: Wu, Chu; Li, Xu; Kong, Dong; Hu, Yue; Ni, Peizhou
Format: Article
Language: English
Abstract: Due to the homogeneity of the ground in outdoor scenes, i.e., its self-similar textures, ground features are prone to inaccurate or even incorrect matching. These mismatches inevitably introduce additional error when computing the reprojection error function, which in turn degrades the accuracy of simultaneous localization and mapping (SLAM). In this article, we propose a ground-aware decoupling-optimized SLAM system, called GDO-SLAM, which is essentially a pruned, semantics-guided SLAM in which a custom ground decoupling optimization module is introduced into the tracking and local mapping threads of ORB-SLAM2. At its core, the optimization module is a decoupling constraint that increases the weights of vertical observations of ground features and reduces the weights of horizontal observations in the reprojection error function. Specifically, we design a novel ground segmentation network that achieves an optimal balance between accuracy and real-time performance, and verify its ground-category IoU of 98.6% on the urban landscape dataset. Extensive experiments on both the public KITTI dataset and our self-collected dataset demonstrate that GDO-SLAM outperforms the representative baseline ORB-SLAM2 in translation and rotation accuracy by 7.5% and 8.3%, respectively.
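The decoupling constraint described in the abstract lends itself to a short worked formulation. The following is a minimal sketch, assuming that "vertical" and "horizontal" observations refer to the v- and u-components of the image-plane residual and that the per-feature weights enter as an anisotropic information matrix; the symbols $w_u$, $w_v$, $\pi$, and $T_{cw}$ are illustrative assumptions, not notation taken from the paper:

\[
e_i = \mathbf{u}_i - \pi\!\left(T_{cw}\,\mathbf{P}_i\right), \qquad
E = \sum_i e_i^{\top} \Lambda_i\, e_i, \qquad
\Lambda_i =
\begin{cases}
\operatorname{diag}(w_u,\, w_v), & \mathbf{P}_i \text{ on ground}, \; w_v > w_u,\\
I_2, & \text{otherwise,}
\end{cases}
\]

where $\mathbf{u}_i$ is the matched pixel, $\pi(\cdot)$ the camera projection, $T_{cw}$ the camera pose, and $\mathbf{P}_i$ the map point. Under this reading, down-weighting the u-residual of ground features limits the influence of mismatches along the self-similar ground texture, while the better-constrained v-residual is emphasized.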
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2024.3452114