A Multisensor Fusion With Automatic Vision-LiDAR Calibration Based on Factor Graph Joint Optimization for SLAM
Published in: IEEE Transactions on Instrumentation and Measurement, 2023, Vol. 72, pp. 1-9
Main authors:
Format: Article
Language: English
Online access: Order full text
Summary: Combining multiple sensors for environment sensing and self-positioning is essential for autonomous driving. This article proposes a novel simultaneous localization and mapping (SLAM) system framework that integrates information from multiple sensors, including camera, light detection and ranging (LiDAR), inertial measurement unit (IMU), and global positioning system (GPS), based on vision-LiDAR calibration. The sensors are fused in a tightly coupled manner and jointly optimized by a factor graph. Automatic vision-LiDAR calibration (AVLC) is proposed to reduce errors caused by unexpected changes in the sensors. Further, a semantic map is built by the object detection module, which facilitates navigation and obstacle avoidance. The proposed algorithm uses Complex-YOLO for 3-D object recognition and then combines the recognition results with the semi-dense point cloud map generated by the multisensor fusion positioning algorithm with AVLC, enriching the map's information. Extensive experiments on multiple datasets show that the proposed algorithm achieves higher accuracy and robustness than existing algorithms.
ISSN: 0018-9456, 1557-9662
DOI: 10.1109/TIM.2023.3323964
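
As a rough illustration of the factor-graph joint optimization the abstract describes, the sketch below builds a tiny pose graph with the GTSAM library: a loose GPS-like prior on the first pose and a relative-pose factor standing in for LiDAR/visual odometry, jointly optimized with Levenberg-Marquardt. GTSAM, the variable layout, and all noise values are illustrative assumptions; the paper's actual factor design is not reproduced here.

```python
# Minimal factor-graph fusion sketch (assumed GTSAM backend; values illustrative).
import gtsam
import numpy as np

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Pose variables for two consecutive keyframes.
X0, X1 = gtsam.symbol('x', 0), gtsam.symbol('x', 1)

# Loose GPS-like prior on the first pose (sigmas: 3 rotation [rad], 3 translation [m]).
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.3] * 3 + [1.0] * 3))
graph.add(gtsam.PriorFactorPose3(X0, gtsam.Pose3(), prior_noise))

# Relative-pose constraint standing in for LiDAR scan matching or visual odometry:
# the vehicle is measured to have moved 1 m forward between keyframes.
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1] * 6))
delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(X0, X1, delta, odom_noise))

# Initial guesses for the optimizer.
initial.insert(X0, gtsam.Pose3())
initial.insert(X1, delta)

# Joint optimization of all factors, as in a factor-graph SLAM backend.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X1))
```

In a full system of the kind the abstract outlines, IMU preintegration factors, additional GPS factors, and the AVLC-estimated camera-LiDAR extrinsic would enter this same graph as further factor and variable types.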