LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields
Main authors: 
Format: Article
Language: English
Subjects: 
Online access: Order full text
Summary: We introduce a new task, novel view synthesis for LiDAR sensors. While traditional model-based LiDAR simulators combined with style-transfer neural networks can be applied to render novel views, they fall short of producing accurate and realistic LiDAR patterns because the renderers rely on explicit 3D reconstruction and game engines, which ignore important attributes of LiDAR points. We address this challenge by formulating, to the best of our knowledge, the first differentiable end-to-end LiDAR rendering framework, LiDAR-NeRF, which leverages a neural radiance field (NeRF) to jointly learn the geometry and attributes of 3D points. However, simply employing NeRF cannot achieve satisfactory results, as it focuses on individual pixels and ignores local structure, especially in low-texture areas, resulting in poor geometry. We therefore introduce a structural regularization method that preserves local structural details. To evaluate our approach, we establish an object-centric multi-view LiDAR dataset, dubbed NeRF-MVL, containing 360-degree observations of objects from 9 categories captured with multiple LiDAR sensors. Extensive experiments on the scene-level KITTI-360 dataset and our object-level NeRF-MVL dataset show that LiDAR-NeRF significantly surpasses model-based algorithms.
DOI: 10.48550/arxiv.2304.10406
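As a rough illustration of the rendering idea described in the summary, and not the authors' actual implementation, the sketch below composites a density and an intensity field along a single LiDAR ray into an expected range and intensity, and adds a simple patch-based smoothness penalty as a stand-in for the paper's structural regularization. The names `density_fn`, `intensity_fn`, and `structural_smoothness` are assumptions introduced for this example.

```python
import numpy as np

def render_lidar_ray(origin, direction, density_fn, intensity_fn,
                     near=1.0, far=80.0, n_samples=128):
    """Composite an expected range (depth) and intensity along one LiDAR ray.

    density_fn / intensity_fn are placeholders for a learned field (e.g. an MLP);
    here they can be any callables mapping (N, 3) points to (N,) values.
    """
    # Sample distances along the ray and lift them to 3D points.
    t = np.linspace(near, far, n_samples)
    pts = origin[None, :] + t[:, None] * direction[None, :]

    sigma = density_fn(pts)        # volume density at each sample
    intensity = intensity_fn(pts)  # per-point LiDAR intensity attribute

    # Standard NeRF-style alpha compositing weights.
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * trans

    depth = np.sum(weights * t)          # expected range measurement
    refl = np.sum(weights * intensity)   # expected intensity return
    return depth, refl

def structural_smoothness(depth_patch):
    """First-order smoothness on a rendered depth patch: a simple stand-in
    for a structural regularizer over local neighborhoods, not the paper's
    actual formulation."""
    dx = np.diff(depth_patch, axis=1)
    dy = np.diff(depth_patch, axis=0)
    return np.mean(np.abs(dx)) + np.mean(np.abs(dy))

# Toy usage with dummy fields standing in for the learned network.
if __name__ == "__main__":
    density = lambda p: np.exp(-0.5 * (np.linalg.norm(p, axis=-1) - 10.0) ** 2)
    intensity = lambda p: np.full(len(p), 0.7)
    d, r = render_lidar_ray(np.zeros(3), np.array([1.0, 0.0, 0.0]),
                            density, intensity)
    print(f"expected depth {d:.2f} m, intensity {r:.2f}")
    print("smoothness:", structural_smoothness(np.random.default_rng(0).normal(size=(8, 8))))
```

In the actual framework these quantities would be produced by a differentiable field and optimized end-to-end against real LiDAR returns; the sketch only shows the compositing arithmetic on plain NumPy arrays.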