Rapid Implementation of an Advanced Visual Localization System for Mobile Robot Navigation
Published in: Engineering Letters, 2024-07, Vol. 32 (7), p. 1545
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: The Global Positioning System (GPS) is the most widely used positioning system for outdoor localization and navigation. However, GPS signals are not always available, especially in indoor or urban canyon environments. As such, alternative positioning systems capable of operating in GPS-denied environments are essential. This paper proposes a novel visual positioning system that combines Red-Green-Blue Depth (RGBD) map construction, semantic graph-based image matching, and dynamic localization and tracking. Our system utilizes a multi-modal sensor consisting of LiDAR and a camera to acquire data and build a map library of RGBD images with sparse depth information. To initialize localization, we construct semantic graphs from observed and map images and build image descriptors for matching to obtain approximate positions. To achieve continuous localization, we combine visual odometry with the ASpanFormer image matching method and correct pose estimates against the map library to reduce cumulative errors. We also dynamically update the map library in response to environmental changes. The results show that our system achieves superior accuracy and robustness in challenging scenarios, such as lighting variations, dynamic objects, and similar scene distributions.
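As a rough illustration of the coarse-to-fine localization described in the abstract (not the authors' implementation), the sketch below retrieves the nearest map image by descriptor similarity and then refines the camera pose from matched 2D-3D correspondences using OpenCV's PnP with RANSAC. The function names, the descriptor representation, and the reprojection threshold are assumptions for illustration only; the paper's semantic-graph descriptors and ASpanFormer matches would supply the inputs.

```python
# Minimal sketch of coarse-to-fine localization: descriptor retrieval + PnP refinement.
# All names here are hypothetical placeholders, not the paper's code.
import numpy as np
import cv2


def retrieve_map_image(query_descriptor, map_descriptors):
    """Coarse step: return the index of the map entry whose descriptor
    (e.g. a semantic-graph embedding) is most similar to the query's,
    measured by cosine similarity."""
    q = query_descriptor / np.linalg.norm(query_descriptor)
    m = map_descriptors / np.linalg.norm(map_descriptors, axis=1, keepdims=True)
    return int(np.argmax(m @ q))


def refine_pose(points_3d, points_2d, camera_matrix):
    """Fine step: estimate the camera pose from 2D-3D correspondences.
    points_3d would come from the RGBD map's sparse depth; points_2d from an
    image matcher (ASpanFormer in the paper, any semi-dense matcher here)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64),
        points_2d.astype(np.float64),
        camera_matrix,
        distCoeffs=None,
        reprojectionError=3.0,  # assumed inlier threshold in pixels
    )
    if not ok:
        raise RuntimeError("PnP failed; fall back to the visual-odometry prediction")
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec.reshape(3), inliers
```

In this reading, the retrieval step stands in for the semantic-graph matching that yields an approximate position, while the PnP step stands in for the map-based pose correction that bounds the drift of visual odometry.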
ISSN: 1816-093X, 1816-0948