Learning Visual Semantic Map-Matching for Loosely Multi-Sensor Fusion Localization of Autonomous Vehicles
Published in: IEEE Transactions on Intelligent Vehicles, 2023-01, Vol. 8, No. 1, pp. 358-367
Main authors:
Format: Article
Language: English
Online access: Order full text
Abstract: Precise localization is essential but also challenging for autonomous vehicles. In this article, a novel visual localization method is proposed. Specifically, a semantic local map describing the local environment is built from an image sequence and wheel-inertial ego-motion results. The local semantic map is then matched against an online map database to estimate the camera position. The key novelty of the method lies in using a supervised neural network to simplify the map-matching problem, which avoids complex data association and optimization processes. The network encodes the maps, infers the feature similarity, and predicts the camera position. The visual localization results are then loosely integrated with other onboard sensors by an invariant Kalman filter. We evaluate the map-matching module and the overall fusion system in scenario tests. The experimental results validate the effectiveness of the learning-based map-matching method, and the accuracy of the overall system is satisfactory, with mean absolute errors of 0.039 m and 0.167 m in the lateral and longitudinal directions, respectively.
ISSN: 2379-8858, 2379-8904
DOI: 10.1109/TIV.2022.3173662
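
The abstract describes a pipeline in which a shared network encodes two semantic maps, relates their features, and regresses the camera position. The sketch below illustrates that general idea only; it is not the authors' published architecture. Everything here is an assumption: PyTorch, rasterized 3-channel semantic maps of 128x128 px, the layer sizes, the simple feature concatenation standing in for similarity inference, and a 2-D (x, y) position output.

```python
# Minimal, illustrative sketch of learning-based map matching (NOT the
# paper's architecture): encode the online-built local semantic map and a
# map-database crop with a shared CNN, fuse the features, and regress the
# camera position. All shapes and layer sizes are assumptions.
import torch
import torch.nn as nn

class MapMatchNet(nn.Module):
    def __init__(self, in_ch: int = 3):
        super().__init__()
        # Shared encoder applied to both rasterized semantic maps.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Regression head: concatenated features -> 2-D position (x, y).
        # 64 channels * 16 * 16 spatial * 2 maps, for 128x128 inputs.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16 * 2, 128), nn.ReLU(),
            nn.Linear(128, 2),
        )

    def forward(self, local_map: torch.Tensor, db_map: torch.Tensor) -> torch.Tensor:
        f_local = self.encoder(local_map)  # features of the local semantic map
        f_db = self.encoder(db_map)        # features of the map-database crop
        fused = torch.cat([f_local, f_db], dim=1)  # crude stand-in for similarity inference
        return self.head(fused)            # predicted camera position in the map frame

# Usage with a batch of one 128x128 semantic raster pair:
net = MapMatchNet()
pos = net(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
print(pos.shape)  # torch.Size([1, 2])
```

In the paper's full system, the position predicted by such a module would serve as one measurement among several, fused with the other onboard sensors by an invariant Kalman filter in a loosely coupled scheme; that filter is not sketched here.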