Points2NeRF: Generating Neural Radiance Fields from 3D point cloud
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Contemporary registration devices for 3D visual information, such as
LIDARs and various depth cameras, capture data as 3D point clouds. In turn,
such clouds are challenging to process due to their size and complexity.
Existing methods address this problem by fitting a mesh to the point cloud and
rendering it instead. This approach, however, reduces the fidelity of the
resulting visualization and discards the color information of the objects,
which is crucial in computer graphics applications. In this work, we propose
to mitigate this challenge by representing 3D objects as Neural Radiance
Fields (NeRFs). We leverage a hypernetwork paradigm and train the model to
take a 3D point cloud with the associated color values and return a NeRF
network's weights that reconstruct 3D objects from input 2D images. Our
method provides an efficient 3D object representation and offers several
advantages over existing approaches, including the ability to condition NeRFs
and improved generalization beyond objects seen in training, which we confirm
in our empirical evaluation.
DOI: 10.48550/arxiv.2206.01290
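
The abstract describes a hypernetwork that consumes a colored point cloud and emits the weights of a NeRF network. As an illustration only, the sketch below shows one plausible shape of that pipeline in PyTorch: a PointNet-style encoder pools per-point features into a latent code, and a hypernetwork head decodes that code into flat weight tensors for a tiny NeRF MLP. All module names, layer sizes, and the simplified target NeRF (no view directions or positional encoding) are assumptions for readability, not the authors' actual Points2NeRF architecture.

```python
# Hedged sketch of the point-cloud -> NeRF-weights idea from the abstract.
# Sizes and names are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """PointNet-like encoder: per-point MLP followed by max pooling."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(6, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, points):          # points: (B, N, 6) = xyz + rgb
        feats = self.point_mlp(points)  # (B, N, latent_dim)
        return feats.max(dim=1).values  # (B, latent_dim), permutation-invariant

class NeRFHypernetwork(nn.Module):
    """Maps a latent code to flat weight tensors for a tiny NeRF MLP.
    The target NeRF here maps a 3D point to (rgb, density); view
    directions and positional encoding are omitted for brevity."""
    def __init__(self, latent_dim=256, hidden=64):
        super().__init__()
        # Target NeRF: 3 -> hidden -> hidden -> 4 (rgb + sigma)
        self.shapes = [(hidden, 3), (hidden,),
                       (hidden, hidden), (hidden,),
                       (4, hidden), (4,)]
        n_params = sum(torch.Size(s).numel() for s in self.shapes)
        self.head = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_params),
        )

    def forward(self, z):               # z: (latent_dim,) for one object
        flat = self.head(z)
        weights, i = [], 0
        for s in self.shapes:           # unflatten into per-layer tensors
            n = torch.Size(s).numel()
            weights.append(flat[i:i + n].view(s))
            i += n
        return weights

def nerf_forward(weights, x):
    """Run the generated NeRF weights on query points x: (M, 3)."""
    w1, b1, w2, b2, w3, b3 = weights
    h = torch.relu(x @ w1.t() + b1)
    h = torch.relu(h @ w2.t() + b2)
    out = h @ w3.t() + b3               # (M, 4): rgb + density
    return torch.sigmoid(out[:, :3]), torch.relu(out[:, 3])

# Usage: one colored point cloud in, one object-specific NeRF out.
encoder, hyper = PointCloudEncoder(), NeRFHypernetwork()
cloud = torch.rand(1, 2048, 6)          # 2048 points, xyz + rgb
z = encoder(cloud)[0]
nerf_weights = hyper(z)
rgb, sigma = nerf_forward(nerf_weights, torch.rand(100, 3))
print(rgb.shape, sigma.shape)           # torch.Size([100, 3]) torch.Size([100])
```

In this framing, training would compare renderings of the generated NeRF against the input 2D images, so gradients flow through the generated weights back into the hypernetwork and encoder; conditioning on the point cloud is what lets one trained model produce NeRFs for objects unseen during training.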