Targetless Extrinsic Calibration of Multiple Small FoV LiDARs and Cameras Using Adaptive Voxelization

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, 2022, Vol. 71, pp. 1-12
Authors: Liu, Xiyuan; Yuan, Chongjian; Zhang, Fu
Format: Article
Language: English
Abstract
Determining the extrinsic parameters between multiple light detection and ranging (LiDAR) sensors and cameras is essential for autonomous robots, especially for solid-state LiDARs, where each LiDAR unit has a very small field-of-view (FoV) and multiple units are often used collectively. The majority of extrinsic calibration methods are proposed for 360° mechanical spinning LiDARs, where FoV overlap with the other LiDAR or camera sensors is assumed. Few research works have focused on the calibration of small-FoV LiDARs and cameras, or on improving the calibration speed. In this work, we consider the problem of extrinsic calibration among small-FoV LiDARs and cameras, with the aim of shortening the total calibration time and further improving the calibration precision. We first implement an adaptive voxelization technique in the extraction and matching of LiDAR feature points. This process avoids the redundant creation of k-d trees in LiDAR extrinsic calibration and extracts LiDAR feature points more reliably and quickly than existing methods. We then formulate the multi-LiDAR extrinsic calibration as a LiDAR bundle adjustment (BA) problem. By deriving the cost function up to second order, the solving time and precision of the nonlinear least-squares problem are further improved. Our proposed method has been verified on data collected in four targetless scenes and with two types of solid-state LiDARs that have completely different scanning patterns, densities, and FoVs. The robustness of our work has also been validated under eight initial setups, with each setup containing 100 independent trials. Compared with state-of-the-art methods, our work increases the calibration speed by 15 times for LiDAR-LiDAR extrinsic calibration (averaged over 100 independent trials) and by 1.5 times for LiDAR-camera extrinsic calibration (averaged over 50 independent trials) while remaining accurate. To benefit the robotics community, we have also open-sourced our implementation code on GitHub.
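The core technique named in the abstract, adaptive voxelization, replaces repeated k-d tree construction with a coarse voxel grid whose cells are recursively subdivided until the points in each leaf lie approximately on a single plane; these planar leaves then serve as the plane features matched across LiDAR scans. The sketch below illustrates this idea in Python/NumPy. It is not the authors' open-source implementation; the function names, planarity test, and voxel sizes are illustrative assumptions.

```python
# Minimal sketch of adaptive voxelization for plane-feature extraction
# (illustrative only; thresholds and structure are assumptions, not the paper's code).
import numpy as np

def is_planar(points, eig_ratio=0.01, min_points=10):
    """Treat a voxel as planar when the smallest eigenvalue of the point
    covariance is small relative to the largest (points lie near a plane)."""
    if len(points) < min_points:
        return False
    eigvals = np.linalg.eigvalsh(np.cov(points.T))  # eigenvalues in ascending order
    return eigvals[0] < eig_ratio * eigvals[2]

def adaptive_voxelize(points, voxel_size=1.0, min_size=0.125):
    """Cut the cloud into coarse voxels, then recursively split each voxel
    until it is planar (kept as a plane feature) or too small to split."""
    planes = []

    def recurse(pts, size):
        if is_planar(pts):
            planes.append(pts)            # keep this voxel as one plane feature
            return
        if size <= min_size or len(pts) < 10:
            return                        # discard non-planar, unsplittable voxels
        center = (pts.min(0) + pts.max(0)) / 2.0
        octant = (pts > center).astype(int) @ np.array([1, 2, 4])
        for o in range(8):                # split into eight octants
            sub = pts[octant == o]
            if len(sub):
                recurse(sub, size / 2.0)

    keys = np.floor(points / voxel_size).astype(int)
    for key in np.unique(keys, axis=0):   # initial coarse voxel grid
        recurse(points[np.all(keys == key, axis=1)], voxel_size)
    return planes
```

In a BA formulation of the kind the abstract describes, each planar voxel can contribute a point-to-plane cost; one common closed form (used, for instance, in BALM-style LiDAR bundle adjustment) is the smallest eigenvalue of the covariance of the points aggregated in the voxel, which admits analytic first- and second-order derivatives with respect to the extrinsics, consistent with the "up to second order" derivation mentioned above. The paper's exact cost may differ in detail.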
ISSN: 0018-9456, 1557-9662
DOI: 10.1109/TIM.2022.3176889