Improved LiDAR-camera Calibration Based on Hand-eye Model under Motion Limitation

Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE Sensors Journal 2023-08, Vol. 23 (16), p. 1-1
Main Authors: Li, Jianzhong, Hu, Manjiang, Liu, Shuo, Chang, Dengxiang, Zhou, Yunshui, Qin, Xiaohui
Format: Article
Language: English
Subjects:
Online Access: Order full text
Description
Abstract: The extrinsic transformation between a LiDAR and a camera has six degrees of freedom (6-DoF), but the motion of a wheeled robot is mainly 3-DoF. Classical hand-eye calibration methods used on unmanned aerial vehicles or handheld devices require the sensors to translate and rotate along every axis, which is clearly not feasible for wheeled robots. The problem is therefore to compute the 6-DoF extrinsic parameters under this motion limitation. This paper describes a novel hand-eye calibration method that designs specific robot motions and fully exploits their characteristics to overcome the limitation: trajectories of linear motion and of steady-state rotation are used to solve for the rotation and the translation, respectively. In addition, the traditional artificial-marker localization method is improved by introducing a flat-ground assumption. The proposed method is validated in simulation and in static and dynamic experiments, and compared with classical methods; the results demonstrate that it is more accurate and efficient than those methods.
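The rotation step of such a calibration can be illustrated with a small sketch: if the robot drives along several straight lines, each run yields one motion direction expressed in both sensor frames, and the rotation aligning the two sets of unit directions can be recovered with the Kabsch/SVD method. This is an illustrative reconstruction under that assumption, not the authors' implementation; the function name and the toy data are hypothetical.

```python
import numpy as np

def rotation_from_directions(dirs_a, dirs_b):
    """Estimate rotation R such that dirs_a ≈ dirs_b @ R.T (i.e. a_i = R b_i)
    via the Kabsch/SVD method. dirs_a, dirs_b: (N, 3) unit direction vectors,
    e.g. linear-motion directions seen by the LiDAR and by the camera."""
    H = dirs_b.T @ dirs_a                 # 3x3 cross-covariance of the direction sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])            # guard against a reflection solution
    return Vt.T @ D @ U.T

# Toy check: recover a known rotation from noiseless synthetic directions.
rng = np.random.default_rng(0)
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:             # ensure a proper rotation (det = +1)
    R_true[:, 0] *= -1
dirs_b = rng.normal(size=(5, 3))
dirs_b /= np.linalg.norm(dirs_b, axis=1, keepdims=True)
dirs_a = dirs_b @ R_true.T                # directions as the other sensor would see them
R_est = rotation_from_directions(dirs_a, dirs_b)
assert np.allclose(R_est, R_true, atol=1e-8)
```

Note that at least two non-parallel driving directions are needed for the rotation to be fully determined; the translation, as the abstract states, requires the additional steady-state rotation trajectories.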
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2023.3294294