IBL-NeRF: Image-Based Lighting Formulation of Neural Radiance Fields
Format: Article
Language: English
Abstract:
We propose IBL-NeRF, which decomposes the neural radiance fields (NeRF) of large-scale indoor scenes into intrinsic components. Recent approaches further decompose the baked radiance of the implicit volume into intrinsic components such that one can partially approximate the rendering equation. However, they are limited to representing isolated objects under shared environment lighting, and they suffer from the computational burden of aggregating rays with Monte Carlo integration. In contrast, our prefiltered radiance field extends the original NeRF formulation to capture the spatial variation of lighting within the scene volume, in addition to surface properties. Specifically, scenes of diverse materials are decomposed into intrinsic components for rendering, namely albedo, roughness, surface normal, irradiance, and prefiltered radiance. All of the components are inferred as neural images from an MLP, which can model large-scale general scenes. In particular, the prefiltered radiance effectively models the volumetric light field and captures spatial variation beyond a single environment light. The prefiltering aggregates rays over a set of predefined neighborhood sizes, so the costly Monte Carlo integration of global illumination can be replaced with a simple query from a neural image. By adopting NeRF, our approach inherits superior visual quality and multi-view consistency for the synthesized images as well as the intrinsic components. We demonstrate the performance on scenes with complex object layouts and light configurations, which could not be handled by any previous approach.
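
For context, the listed intrinsic components map onto the standard image-based-lighting (split-sum style) approximation of the rendering equation. The sketch below assumes this form; the symbol $F$ (a precomputed specular BRDF factor) and the exact per-term weighting are illustrative assumptions, not taken from the paper:

$$
L_o(\mathbf{x}, \omega_o) \;\approx\; \underbrace{\frac{a(\mathbf{x})}{\pi}\, E(\mathbf{x}, \mathbf{n})}_{\text{diffuse}} \;+\; \underbrace{F(\omega_o, \mathbf{n}, r)\, L_{\mathrm{pre}}(\mathbf{x}, \omega_r, r)}_{\text{specular}}
$$

Here $a$ is the albedo, $E$ the irradiance, $\mathbf{n}$ the surface normal, $r$ the roughness, $\omega_r$ the reflection of the view direction about $\mathbf{n}$, and $L_{\mathrm{pre}}$ the prefiltered radiance, queried at a roughness-dependent filter size instead of being estimated by Monte Carlo integration over incoming directions.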
DOI: 10.48550/arxiv.2210.08202