3DGS-Calib: 3D Gaussian Splatting for Multimodal SpatioTemporal Calibration
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Reliable multimodal sensor fusion algorithms require accurate spatiotemporal calibration. Recently, targetless calibration techniques based on implicit neural representations have proven to provide precise and robust results. Nevertheless, such methods are inherently slow to train given the high computational overhead caused by the large number of sampled points required for volume rendering. With the recent introduction of 3D Gaussian Splatting as a faster alternative to implicit representation methods, we propose to leverage this new rendering approach to achieve faster multi-sensor calibration. We introduce 3DGS-Calib, a new calibration method that relies on the speed and rendering accuracy of 3D Gaussian Splatting to achieve multimodal spatiotemporal calibration that is accurate and robust, with a substantial speed-up compared to methods relying on implicit neural representations. We demonstrate the superiority of our proposal with experimental results on sequences from KITTI-360, a widely used driving dataset.
DOI: 10.48550/arxiv.2403.11577
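The record contains only the abstract, so the authors' actual pipeline (multimodal LiDAR-camera supervision, temporal offsets, full 3D Gaussian Splatting) is not reproduced here. The sketch below is a minimal PyTorch toy that only illustrates the core idea the abstract relies on: an extrinsic pose can be calibrated by backpropagating a photometric rendering loss through a differentiable splatting-style renderer. The pinhole camera, the isotropic splats, and every name and parameter in the code are illustrative assumptions, not taken from the paper.

```python
# Toy sketch (not the authors' code): recover a camera extrinsic by minimising
# a photometric loss through a simplified differentiable Gaussian "splatting"
# renderer. All values below are illustrative.
import torch

def axis_angle_to_matrix(r):
    """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
    theta = torch.sqrt((r * r).sum() + 1e-12)
    k = r / theta
    zero = torch.zeros((), dtype=r.dtype)
    K = torch.stack([
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])
    return torch.eye(3, dtype=r.dtype) + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)

def render(means, colours, rot_vec, trans,
           fx=100.0, cx=32.0, cy=32.0, height=64, width=64, sigma=1.5):
    """Splat isotropic Gaussians into an image under a candidate extrinsic.
    Everything is differentiable w.r.t. rot_vec and trans."""
    R = axis_angle_to_matrix(rot_vec)
    cam = means @ R.T + trans                    # (N, 3) points in the camera frame
    u = fx * cam[:, 0] / cam[:, 2] + cx          # pinhole projection to pixel coords
    v = fx * cam[:, 1] / cam[:, 2] + cy
    ys = torch.arange(height, dtype=means.dtype).view(height, 1, 1)
    xs = torch.arange(width, dtype=means.dtype).view(1, width, 1)
    # Isotropic 2D Gaussian weight of every splat at every pixel, then a
    # normalised colour blend (a crude stand-in for alpha compositing).
    w = torch.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2.0 * sigma ** 2))   # (H, W, N)
    return (w.unsqueeze(-1) * colours).sum(dim=2) / (w.sum(dim=2, keepdim=True) + 1e-6)

torch.manual_seed(0)
means = torch.rand(50, 3) * 4.0 - 2.0 + torch.tensor([0.0, 0.0, 8.0])  # points in front of the camera
colours = torch.rand(50, 3)

# Reference image: in a real pipeline this would be a captured camera frame;
# here it is synthesised with a "ground-truth" extrinsic.
gt_rot = torch.tensor([0.05, -0.03, 0.02])
gt_trans = torch.tensor([0.10, -0.05, 0.00])
target = render(means, colours, gt_rot, gt_trans).detach()

# Start from an uncalibrated guess and refine it with the photometric loss.
rot_vec = torch.full((3,), 1e-3, requires_grad=True)
trans = torch.zeros(3, requires_grad=True)
optimiser = torch.optim.Adam([rot_vec, trans], lr=1e-2)
for step in range(400):
    optimiser.zero_grad()
    loss = torch.nn.functional.mse_loss(render(means, colours, rot_vec, trans), target)
    loss.backward()
    optimiser.step()

print("estimated rotation (axis-angle):", rot_vec.detach().tolist())
print("estimated translation:", trans.detach().tolist())
```

The per-pixel Gaussian blend here replaces the tile-based, depth-sorted alpha compositing of real 3D Gaussian Splatting, and the scene is synthetic; the point of the sketch is only that the rendering loss provides gradients directly with respect to the calibration parameters, which is what makes a fast rasteriser attractive compared to sampling-heavy volume rendering.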