Robust Extrinsic Self-Calibration of Camera and Solid State LiDAR


Detailed Description

Bibliographic Details
Published in: Journal of intelligent & robotic systems, 2023-12, Vol. 109 (4), p. 81, Article 81
Main Authors: Liu, Jiahui; Zhan, Xingqun; Chi, Cheng; Zhang, Xin; Zhai, Chuanrun
Format: Article
Language: English
Description
Abstract: This work proposes an extrinsic calibration approach designed for the alignment of a monocular camera with a prism-spinning solid-state LiDAR. Challenges arise due to the absence of adjacent laser rings, which are essential for the detection of line or plane features, in solid-state LiDAR systems. Additionally, the existence of a distinct type of outlier, designated as ‘vacant points’, complicates the task of feature extraction, particularly for methods reliant on depth variation. In contrast to existing methods that leverage reflectivity variation in depth-continuous measurements to circumvent this issue, we use depth-discontinuous measurements to retain more valid features by efficiently removing the vacant points. The detected 3D corners thus carry more reliable a priori information, which, together with the 2D corners detected by the camera and constrained by our proposed rules, produces accurate extrinsic estimates. The efficacy of our algorithm is thoroughly evaluated through real-world field experiments, encompassing both qualitative and quantitative performance assessments. The results show its superiority over existing algorithms. Moreover, robustness tests demonstrate the algorithm’s resilience, particularly in feature-barren outdoor environments. The code is available on GitHub.
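The key preprocessing idea in the abstract is to keep depth-discontinuous measurements while discarding the spurious ‘vacant points’ that appear around depth edges. As a rough illustration only (the paper's actual removal rule is not given in this record), the sketch below uses a simple hypothetical criterion: points immediately adjacent to a large range jump along a scan are flagged as vacant-point candidates. The function name, the per-scan-line layout, and the `jump_thresh` parameter are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def flag_vacant_points(ranges, jump_thresh=0.5):
    """Flag points adjacent to large depth discontinuities as
    vacant-point candidates.

    Hypothetical criterion for illustration: a point is flagged if the
    range difference to either neighbor along the scan exceeds
    ``jump_thresh`` (in meters). The paper's actual rule is more involved.
    """
    ranges = np.asarray(ranges, dtype=float)
    diffs = np.abs(np.diff(ranges))          # range jump between neighbors
    jump = diffs > jump_thresh               # True where a discontinuity sits
    vacant = np.zeros(ranges.shape, dtype=bool)
    vacant[:-1] |= jump                      # point just before the jump
    vacant[1:] |= jump                       # point just after the jump
    return vacant

# Example: a flat surface at ~2 m followed by one at ~5 m; the two points
# straddling the 3 m jump are flagged, the rest are kept as valid features.
mask = flag_vacant_points([2.00, 2.01, 2.02, 5.00, 5.01])
```

In a full pipeline, the retained (non-flagged) depth-edge points would feed the 3D corner detector, whose output is then matched against 2D image corners for the extrinsic estimation step.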
ISSN: 0921-0296, 1573-0409
DOI: 10.1007/s10846-023-02015-w