Camera-LiDAR Extrinsic Calibration Using Constrained Optimization With Circle Placement
Saved in:
Published in: | IEEE robotics and automation letters 2025-02, Vol.10 (2), p.883-890 |
---|---|
Main authors: | , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Monocular camera-LiDAR data fusion has demonstrated remarkable environmental perception capabilities in various fields. The success of data fusion relies on accurate matching of corresponding features from images and point clouds. In this letter, we propose a target-based camera-LiDAR extrinsic calibration by matching correspondences in both data sources. Specifically, to extract accurate features from the point cloud, we propose a novel method that estimates circle centers by optimizing a probability distribution from an initial position. This optimization generates the probability distribution of circle centers from circle edge points and uses the Lagrangian multiplier method to estimate the optimal positions of the circle centers. We conduct two types of experiments: simulations for quantitative results and real-system evaluations for qualitative assessment. Our method demonstrates a 21% improvement in simulation calibration performance for 20 target poses with LiDAR noise of 0.03 m compared to existing methods, and also shows high visual quality when reprojecting point clouds onto images in real-world scenarios. |
---|---|
ISSN: | 2377-3766 |
DOI: | 10.1109/LRA.2024.3512253 |
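The abstract's key step, estimating circle centers from noisy edge points via a probability distribution over candidate positions, can be sketched in a minimal form. This is not the paper's actual algorithm (which refines the estimate with Lagrangian multipliers); the function name `estimate_circle_center`, the grid search, and the Gaussian noise model `sigma` are illustrative assumptions only.

```python
import numpy as np

def estimate_circle_center(edge_pts, radius, sigma=0.01, grid_res=0.005):
    """Hypothetical sketch: with a known target radius, each edge point
    constrains the center to lie on a circle of that radius around it.
    Accumulate a Gaussian log-likelihood over a candidate-center grid
    and return the most probable cell."""
    lo = edge_pts.min(axis=0) - radius
    hi = edge_pts.max(axis=0) + radius
    xs = np.arange(lo[0], hi[0], grid_res)
    ys = np.arange(lo[1], hi[1], grid_res)
    gx, gy = np.meshgrid(xs, ys)
    logp = np.zeros_like(gx)
    for px, py in edge_pts:
        # residual of each candidate center against this edge point
        d = np.hypot(gx - px, gy - py) - radius
        logp += -0.5 * (d / sigma) ** 2
    i = np.unravel_index(np.argmax(logp), logp.shape)
    return np.array([gx[i], gy[i]])

# Synthetic check: noisy edge points on a 0.1 m circle centered at (0.5, 0.2)
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 60)
true_c = np.array([0.5, 0.2])
pts = true_c + 0.1 * np.c_[np.cos(theta), np.sin(theta)]
pts += rng.normal(0.0, 0.01, pts.shape)
center = estimate_circle_center(pts, radius=0.1)
```

In the paper itself this coarse maximum would only serve as an initialization; the reported method then solves a constrained optimization (via Lagrangian multipliers) for the optimal center positions.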