Neural Surface Reconstruction and Rendering for LiDAR-Visual Systems
Format: Article
Language: English
Abstract: This paper presents a unified surface reconstruction and rendering framework for LiDAR-visual systems, integrating Neural Radiance Fields (NeRF) and Neural Distance Fields (NDF) to recover both appearance and structural information from posed images and point clouds. We address the structural visibility gap between NeRF and NDF by utilizing a visibility-aware occupancy map that classifies space into free, occupied, visible-unknown, and background regions. This classification facilitates the recovery of the complete appearance and structure of the scene.
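To make the four-way classification concrete, here is a minimal Python sketch of a visibility-aware occupancy labeling built by ray-casting LiDAR returns through a voxel grid. The function names, the max-range rule for background, and the simple step-based ray march are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical labels for the four regions named in the abstract.
FREE, OCCUPIED, VISIBLE_UNKNOWN, BACKGROUND = 0, 1, 2, 3

def classify_voxels(origin, points, grid_min, voxel_size, grid_shape, max_range):
    """Label a voxel grid by casting a ray from the sensor origin to each
    LiDAR return: traversed voxels become FREE, the return's voxel OCCUPIED,
    voxels no ray ever crosses stay VISIBLE_UNKNOWN, and returns beyond
    max_range are treated as BACKGROUND (all of this is an assumption)."""
    labels = np.full(grid_shape, VISIBLE_UNKNOWN, dtype=np.uint8)

    def voxel_of(p):
        return tuple(np.floor((p - grid_min) / voxel_size).astype(int))

    def in_grid(idx):
        return all(0 <= i < s for i, s in zip(idx, grid_shape))

    for p in points:
        direction = p - origin
        dist = float(np.linalg.norm(direction))
        direction = direction / dist
        # March in voxel-sized steps (a simplified stand-in for DDA traversal).
        for t in np.arange(0.0, min(dist, max_range), voxel_size):
            idx = voxel_of(origin + t * direction)
            if in_grid(idx):
                labels[idx] = FREE
        end = voxel_of(p)
        if in_grid(end):
            labels[end] = BACKGROUND if dist > max_range else OCCUPIED
    return labels
```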
We unify the training of the NDF and NeRF using a spatially-varying-scale SDF-to-density transformation that provides levels of detail for both structure and appearance.
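As one concrete instance of an SDF-to-density transformation, the VolSDF-style Laplace-CDF mapping below accepts a per-sample scale `beta`; whether the paper uses this exact form is an assumption, but it illustrates how a spatially-varying scale trades sharp structure (small `beta`) against smoother, coarser detail (large `beta`).

```python
import numpy as np

def sdf_to_density(sdf, beta):
    """Map signed distance to volume density via a Laplace CDF, as in
    VolSDF-style formulations: density = (1/beta) * Psi_beta(-sdf).
    `beta` may be per-sample (spatially varying); the paper's exact
    transform may differ, this is one common choice."""
    sdf = np.asarray(sdf, dtype=np.float64)
    beta = np.asarray(beta, dtype=np.float64)
    # Density saturates inside the surface (sdf < 0), decays outside.
    return (1.0 / beta) * np.where(
        sdf <= 0,
        1.0 - 0.5 * np.exp(sdf / beta),
        0.5 * np.exp(-sdf / beta),
    )

# Example: a fine scale yields a sharp density falloff, a coarse one a soft falloff.
sdf = np.array([-0.05, 0.0, 0.05])
print(sdf_to_density(sdf, beta=0.01))  # sharp level of detail
print(sdf_to_density(sdf, beta=0.10))  # coarse level of detail
```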
The proposed method leverages the learned NDF for structure-aware NeRF training via an adaptive sphere-tracing sampling strategy for accurate structure rendering. In return, NeRF further refines the NDF, recovering missing or fuzzy structures.
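The following sketch shows the core of sphere-tracing-based sampling against a learned distance field: each step advances by the queried distance value, so samples concentrate near the surface, which is what makes the strategy structure-aware. `sdf_fn` stands in for the learned NDF, and the step count and tolerance are illustrative; the paper's adaptive refinements are not reproduced here.

```python
import numpy as np

def sphere_trace_samples(sdf_fn, origin, direction, n_steps=32, eps=1e-3,
                         t_max=20.0):
    """Walk a ray with sphere tracing: each step advances by the queried
    distance-field value, so no surface can be skipped and sample points
    cluster near geometry. Returns the sample positions along the ray
    (usable as NeRF sample points) and the hit point, if any."""
    t, samples = 0.0, []
    for _ in range(n_steps):
        x = origin + t * direction
        samples.append(x)
        d = sdf_fn(x)
        if d < eps:        # converged onto the surface
            return np.array(samples), x
        t += d             # safe step: no surface within radius d
        if t > t_max:
            break
    return np.array(samples), None

# Toy NDF: a unit sphere at the origin; the trace converges in a few steps.
sdf = lambda x: np.linalg.norm(x) - 1.0
pts, hit = sphere_trace_samples(sdf, np.array([0.0, 0.0, -3.0]),
                                np.array([0.0, 0.0, 1.0]))
print(len(pts), hit)
```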
Extensive experiments demonstrate the superior quality and versatility of the proposed method across various scenarios. To benefit the community, the code will be released at https://github.com/hku-mars/M2Mapping.
DOI: 10.48550/arxiv.2409.05310