Neural Volume Super-Resolution
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Neural volumetric representations have become a widely adopted model for
radiance fields in 3D scenes. These representations are fully implicit or
hybrid function approximators of the instantaneous volumetric radiance in a
scene, which are typically learned from multi-view captures of the scene. We
investigate the new task of neural volume super-resolution - rendering
high-resolution views corresponding to a scene captured at low resolution. To
this end, we propose a neural super-resolution network that operates directly
on the volumetric representation of the scene. This approach allows us to
exploit an advantage of operating in the volumetric domain, namely the ability
to guarantee consistent super-resolution across different viewing directions.
To realize our method, we devise a novel 3D representation that hinges on
multiple 2D feature planes. This allows us to super-resolve the 3D scene
representation by applying 2D convolutional networks on the 2D feature planes.
We validate the proposed method by super-resolving multi-view consistent views
on a diverse set of unseen 3D scenes, confirming qualitatively and quantitatively
favorable quality over existing approaches.
DOI: 10.48550/arxiv.2212.04666
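To make the idea in the abstract concrete, below is a minimal PyTorch sketch of a scene factorized into axis-aligned 2D feature planes, with a 2D convolutional network that super-resolves each plane. This is an illustrative sketch only: the class names, plane resolutions, channel counts, and the upsampler architecture are assumptions for demonstration, not the paper's actual implementation, and the radiance decoding/rendering step is omitted.

```python
# Sketch (assumptions, not the paper's code): a scene represented by three
# axis-aligned 2D feature planes (xy, xz, yz), super-resolved by a shared
# 2D convolutional network applied directly to the planes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriPlaneScene(nn.Module):
    def __init__(self, res=64, channels=32):
        super().__init__()
        # Three learnable 2D feature planes, one per axis-aligned slice.
        self.planes = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(1, channels, res, res)) for _ in range(3)]
        )

    def features(self, xyz):
        # xyz: (N, 3) points in [-1, 1]^3. Bilinearly sample each plane and sum.
        coords = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]  # xy, xz, yz
        feats = 0.0
        for plane, uv in zip(self.planes, coords):
            grid = uv.view(1, -1, 1, 2)                             # (1, N, 1, 2)
            sampled = F.grid_sample(plane, grid, align_corners=True)  # (1, C, N, 1)
            feats = feats + sampled.view(plane.shape[1], -1).t()      # (N, C)
        return feats

class PlaneSuperResolver(nn.Module):
    """Hypothetical 2D CNN that upsamples a feature plane by `scale`."""
    def __init__(self, channels=32, scale=2):
        super().__init__()
        self.scale = scale
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, plane):
        up = F.interpolate(plane, scale_factor=self.scale,
                           mode="bilinear", align_corners=True)
        return up + self.net(up)  # residual refinement of the upsampled plane

# Usage: super-resolve every plane of a low-resolution scene representation.
# Rendering the high-resolution views from the upsampled planes is omitted.
scene_lr = TriPlaneScene(res=64)
sr = PlaneSuperResolver(channels=32, scale=2)
planes_hr = [sr(p) for p in scene_lr.planes]
print([tuple(p.shape) for p in planes_hr])  # each plane now 128x128
```

Because the super-resolution network acts on the shared volumetric (plane) representation rather than on rendered images, all views rendered from the upsampled planes see the same high-resolution scene, which is what gives the multi-view consistency claimed in the abstract.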