Efficient Implicit Neural Reconstruction Using LiDAR
Main authors: , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Modeling scene geometry using implicit neural representation has revealed its
advantages in accuracy, flexibility, and low memory usage. Previous approaches
have demonstrated impressive results using color or depth images but still have
difficulty handling poor lighting conditions and large-scale scenes. Methods
taking a global point cloud as input require accurate registration and
ground-truth coordinate labels, which limits their application scenarios. In
this paper, we propose a new method that uses sparse LiDAR point clouds and
rough odometry to reconstruct a fine-grained implicit occupancy field
efficiently within a few minutes. We introduce a new loss function that
supervises directly in 3D space without 2D rendering, avoiding information
loss. We also refine the poses of input frames in an end-to-end manner,
creating consistent geometry without global point cloud registration. To the
best of our knowledge, our method is the first to reconstruct an implicit scene
representation from LiDAR-only input. Experiments on synthetic and real-world
datasets, including indoor and outdoor scenes, show that our method is
effective, efficient, and accurate, obtaining results comparable with existing
methods that use dense input.
DOI: 10.48550/arxiv.2302.14363
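
The abstract centers on two technical ideas: supervising an implicit occupancy field directly in 3D space along LiDAR rays (with no 2D rendering step), and refining the rough odometry poses end to end through the same loss. The sketch below is a minimal illustration of how such a setup could look; it is not the authors' implementation, and the network size, the sampling and labeling rule along each ray, and the translation-only pose correction are all assumptions made for clarity.

```python
# Minimal sketch (not the paper's code): an occupancy MLP supervised directly in
# 3D along LiDAR rays, with a learnable per-frame pose correction. Names, sizes,
# and the free/occupied labeling rule are illustrative assumptions.
import torch
import torch.nn as nn

class OccupancyMLP(nn.Module):
    """Maps a 3D point to an occupancy logit."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):                      # xyz: (N, 3)
        return self.net(xyz).squeeze(-1)         # occupancy logits: (N,)

def sample_along_rays(origins, hits, n_samples=16, eps=0.05):
    """Sample points along each LiDAR ray and label them directly in 3D:
    0 (free space) before the measured return, 1 (occupied) near/behind it."""
    dirs = hits - origins                                 # (R, 3), un-normalized
    t = torch.rand(origins.shape[0], n_samples,
                   device=origins.device)                 # (R, S) in [0, 1)
    t = t * (1.0 + 2 * eps)                               # allow samples slightly past the hit
    pts = origins[:, None, :] + t[:, :, None] * dirs[:, None, :]   # (R, S, 3)
    labels = (t > 1.0 - eps).float()                      # occupied near the endpoint
    return pts.reshape(-1, 3), labels.reshape(-1)

# One training step on a single LiDAR frame with a rough odometry pose.
model = OccupancyMLP()
pose_correction = nn.Parameter(torch.zeros(3))            # toy per-frame translation correction
optim = torch.optim.Adam(list(model.parameters()) + [pose_correction], lr=1e-3)

origins = torch.zeros(256, 3)                             # sensor origin for each ray
hits = torch.randn(256, 3) * 5.0                          # placeholder LiDAR returns

# Pose refinement (toy): the correction is optimized jointly with the field.
origins_w = origins + pose_correction
hits_w = hits + pose_correction

pts, labels = sample_along_rays(origins_w, hits_w)
loss = nn.functional.binary_cross_entropy_with_logits(model(pts), labels)
optim.zero_grad()
loss.backward()
optim.step()
```

In this sketch the per-frame pose correction receives gradients through the same occupancy loss as the network, which is the sense in which pose refinement can be end to end; the paper's actual loss formulation and pose parameterization may differ.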