Static multi-target-based auto-calibration of RGB cameras, 3D Radar, and 3D Lidar sensors

Bibliographic Details
Published in: IEEE Sensors Journal, 2023-08, p. 1-1
Authors: Agrawal, Shiva; Bhanderi, Savankumar; Doycheva, Kristina; Elger, Gordon
Format: Article
Language: English
Description
Abstract: For environmental perception, autonomous vehicles and intelligent roadside infrastructure systems contain multiple sensors, i.e. radar (Radio Detection and Ranging), lidar (Light Detection and Ranging), and camera sensors, with the aim of detecting, classifying, and tracking multiple road users. Data from multiple sensors are fused together to enhance the perception quality of the sensor system because each sensor has strengths and weaknesses, e.g. resolution, distance measurement, and dependency on weather conditions. For data fusion, it is necessary to transform the data from the different sensor coordinates to a common coordinate frame. This process is referred to as multi-sensor calibration and is a challenging task, which is mostly performed manually. This paper introduces a new method for auto-calibrating three-dimensional (3D) radar, 3D lidar, and red-green-blue (RGB) mono-camera sensors using a static multi-target-based system. The proposed method can be used with sensors operating at different frame rates without time synchronization. Furthermore, the described static multi-target system is cost-effective, easy to build, and applicable for short- to long-distance calibration. The experimental results for multiple sets of measurements show good results, with projection errors measured as a maximum root mean square error (RMSE) of (u,v) = (2.4, 1.8) pixels for lidar-to-camera calibration, an RMSE of (u,v) = (2.2, 3.0) pixels for 3D radar-to-camera calibration, and an RMSE of (x,y,z) = (2.6, 2.7, 14.0) centimeters for 3D radar-to-lidar calibration.
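
The abstract reports calibration accuracy as a root mean square projection error in pixel coordinates (u, v). The sketch below is only an illustration of how such a pixel-space RMSE can be computed after a rigid lidar-to-camera transform and pinhole projection; it is not the authors' implementation, and the rotation R, translation t, intrinsic matrix K, and target points are made-up example values.

```python
# Illustrative sketch (not the paper's method): project 3D points from the
# lidar frame into the camera image and compute the per-axis pixel RMSE.
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Apply a rigid lidar-to-camera transform, then a pinhole projection.

    points_lidar : (N, 3) array of 3D points in the lidar frame
    R, t         : 3x3 rotation and 3-vector translation (extrinsics)
    K            : 3x3 camera intrinsic matrix
    Returns an (N, 2) array of pixel coordinates (u, v).
    """
    points_cam = points_lidar @ R.T + t      # lidar frame -> camera frame
    uvw = points_cam @ K.T                   # pinhole projection (homogeneous)
    return uvw[:, :2] / uvw[:, 2:3]          # normalize by depth

def projection_rmse(uv_projected, uv_reference):
    """Per-axis RMSE in pixels, the error measure quoted in the abstract."""
    err = uv_projected - uv_reference
    return np.sqrt(np.mean(err ** 2, axis=0))

if __name__ == "__main__":
    # Made-up intrinsics, extrinsics, and target centers for demonstration only.
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)
    t = np.array([0.05, -0.02, 0.10])
    targets_lidar = np.array([[1.0, 0.2, 10.0],
                              [-0.5, 0.1, 15.0],
                              [0.8, -0.3, 25.0]])
    uv_ref = project_lidar_to_image(targets_lidar, R, t, K)      # reference pixels
    uv_est = uv_ref + np.random.normal(0.0, 2.0, uv_ref.shape)   # noisy estimate
    print("RMSE (u, v) in pixels:", projection_rmse(uv_est, uv_ref))
```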
ISSN: 1530-437X
DOI: 10.1109/JSEN.2023.3300957