UPL-SLAM: Unconstrained RGB-D SLAM with Accurate Point-Line Features for Visual Perception

Bibliographic Details
Published in: IEEE Access, 2025, pp. 1-1
Main Authors: Sun, Xianshuai; Zhao, Yuming; Wang, Yabiao; Li, Zhigang; He, Zhen; Wang, Xiaohui
Format: Article
Language: English
Online Access: Full text
Description
Abstract: In mainstream simultaneous localization and mapping (SLAM) algorithms, feature points are commonly used to represent image features. However, the quantity and quality of these feature points depend on the environmental texture, lighting conditions, and motion speed. Although existing algorithms improve adaptability by extracting point and line features simultaneously, trivial short lines caused by environmental noise and object occlusion can degrade system robustness. Therefore, in this study, we propose a line feature fusion strategy together with a model that incorporates an adaptive length suppression parameter for line features. A new line feature residual model is defined, and the analytical form of the line feature Jacobian matrix is derived in detail. Additionally, the point features are organized into a lattice structure and used to construct a global point cloud map in a dedicated thread, aiming to enhance the semantic comprehension of environmental information. Finally, our algorithm is compared against state-of-the-art algorithms on the publicly available TUM RGB-D and ICL-NUIM datasets. The results demonstrate that the proposed algorithm achieves superior positioning accuracy and mapping quality, enabling robust 3D reconstruction of indoor scenes.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3524465
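
The abstract above mentions an adaptive length suppression parameter used to discard trivial short line segments before they enter the point-line SLAM pipeline. The paper's exact formulation is not reproduced in this record; the following is only a minimal C++ sketch of one way such a filter could work, assuming a minimum length proportional to the image diagonal that is relaxed when few lines are detected. All identifiers (LineSegment, minLengthThreshold, suppressShortLines) and the specific threshold values are hypothetical illustrations, not the authors' code.

```cpp
// Illustrative sketch of adaptive short-line suppression (assumed design,
// not the UPL-SLAM implementation).
#include <cmath>
#include <cstddef>
#include <vector>

struct LineSegment {
    double x1, y1, x2, y2;  // endpoint pixel coordinates
    double length() const { return std::hypot(x2 - x1, y2 - y1); }
};

// Adaptive minimum length: a fraction of the image diagonal, relaxed when
// few lines are detected so that low-texture frames still keep features.
double minLengthThreshold(int imageWidth, int imageHeight, std::size_t numDetected) {
    const double diagonal = std::hypot(static_cast<double>(imageWidth),
                                       static_cast<double>(imageHeight));
    double ratio = 0.02;                 // assumed base suppression ratio
    if (numDetected < 30) ratio *= 0.5;  // keep more lines in sparse scenes
    return ratio * diagonal;
}

// Discard trivially short segments, e.g. fragments caused by image noise
// or object occlusion boundaries.
std::vector<LineSegment> suppressShortLines(const std::vector<LineSegment>& lines,
                                            int imageWidth, int imageHeight) {
    const double minLen = minLengthThreshold(imageWidth, imageHeight, lines.size());
    std::vector<LineSegment> kept;
    for (const auto& l : lines)
        if (l.length() >= minLen) kept.push_back(l);
    return kept;
}
```

In a sketch like this, the surviving segments would then feed the line feature residual and Jacobian described in the abstract, while the suppressed fragments never contribute to pose optimization.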