IBL-NeRF: Image-Based Lighting Formulation of Neural Radiance Fields

We propose IBL-NeRF, which decomposes the neural radiance fields (NeRF) of large-scale indoor scenes into intrinsic components. Recent approaches further decompose the baked radiance of the implicit volume into intrinsic components such that one can partially approximate the rendering equation. However, they are limited to representing isolated objects with a shared environment lighting, and suffer from computational burden to aggregate rays with Monte Carlo integration. In contrast, our prefiltered radiance field extends the original NeRF formulation to capture the spatial variation of lighting within the scene volume, in addition to surface properties. Specifically, the scenes of diverse materials are decomposed into intrinsic components for rendering, namely, albedo, roughness, surface normal, irradiance, and prefiltered radiance. All of the components are inferred as neural images from MLP, which can model large-scale general scenes. Especially the prefiltered radiance effectively models the volumetric light field, and captures spatial variation beyond a single environment light. The prefiltering aggregates rays in a set of predefined neighborhood sizes such that we can replace the costly Monte Carlo integration of global illumination with a simple query from a neural image. By adopting NeRF, our approach inherits superior visual quality and multi-view consistency for synthesized images as well as the intrinsic components. We demonstrate the performance on scenes with complex object layouts and light configurations, which could not be processed in any of the previous works.
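
The abstract describes an image-based-lighting style of shading: instead of integrating many rays with Monte Carlo sampling, the diffuse and glossy parts of the rendering equation are approximated by querying prefiltered quantities (irradiance and prefiltered radiance) inferred by the network. The sketch below is illustrative only; the function names, the split into diffuse and glossy terms, and the mapping from roughness to a prefilter neighborhood size are assumptions made here, not the paper's actual interface.

```python
import torch

def shade_with_prefiltered_radiance(albedo, roughness, normal, view_dir,
                                    position, irradiance_field, prefiltered_field):
    """Hypothetical shading step: approximate global illumination with lookups
    into prefiltered neural images rather than Monte Carlo ray aggregation.
    `irradiance_field` and `prefiltered_field` stand in for the MLP-inferred
    components named in the abstract; their signatures are assumed here."""
    # Diffuse term: albedo modulated by the irradiance at the surface point.
    diffuse = albedo * irradiance_field(position, normal)

    # Glossy term: query prefiltered radiance along the mirror direction, with a
    # filter (neighborhood) size that grows with surface roughness, standing in
    # for the integral over many reflected rays.
    reflected = view_dir - 2.0 * (view_dir * normal).sum(dim=-1, keepdim=True) * normal
    neighborhood = roughness  # assumed monotone mapping to a predefined size
    glossy = prefiltered_field(position, reflected, neighborhood)

    return diffuse + glossy
```

In this reading, the "predefined neighborhood sizes" of the abstract play the role of mip levels in classical image-based lighting: each query returns radiance already averaged over a cone of directions, so a single lookup replaces the costly integration.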

Bibliographic Details
Main Authors: Choi, Changwoon; Kim, Juhyeon; Kim, Young Min
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Full text via arXiv (https://arxiv.org/abs/2210.08202)
Published: 2022-10-15
DOI: 10.48550/arxiv.2210.08202
Source: arXiv.org
Rights: http://creativecommons.org/licenses/by/4.0