Recursive-NeRF: An Efficient and Dynamically Growing NeRF

Detailed Description

Bibliographic Details
Published in: IEEE transactions on visualization and computer graphics, 2023-12, Vol. 29 (12), p. 1-14
Main authors: Yang, Guo-Wei; Zhou, Wen-Yang; Peng, Hao-Yang; Liang, Dun; Mu, Tai-Jiang; Hu, Shi-Min
Format: Article
Language: English
Abstract: View synthesis methods using implicit continuous shape representations learned from a set of images, such as the Neural Radiance Field (NeRF) method, have gained increasing attention due to their high-quality imagery and scalability to high resolution. However, the heavy computation required by its volumetric approach prevents NeRF from being useful in practice; minutes are taken to render a single image of a few megapixels. Now, an image of a scene can be rendered in a level-of-detail manner, so we posit that a complicated region of the scene should be represented by a large neural network while a small neural network is capable of encoding a simple region, enabling a balance between efficiency and quality. Recursive-NeRF is our embodiment of this idea, providing an efficient and adaptive rendering and training approach for NeRF. The core of Recursive-NeRF learns uncertainties for query coordinates, representing the quality of the predicted color and volumetric intensity at each level. Only query coordinates with high uncertainties are forwarded to the next level to a bigger neural network with a more powerful representational capability. The final rendered image is a composition of results from neural networks of all levels. Our evaluation on public datasets and a large-scale scene dataset we collected shows that Recursive-NeRF is more efficient than NeRF while providing state-of-the-art quality. The code will be available at https://github.com/Gword/Recursive-NeRF.
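The uncertainty-gated forwarding described in the abstract can be sketched as a toy routine: evaluate all queries with a small model, keep the confident predictions, and forward only high-uncertainty queries to a larger model at the next level. This is an illustrative sketch, not the authors' implementation; `tiny_net`, its simulated uncertainty, the threshold, and the level count are hypothetical stand-ins for the learned MLPs in the paper.

```python
import numpy as np

def tiny_net(x, scale):
    """Hypothetical stand-in for a level-k NeRF MLP.

    Returns a per-query (color, uncertainty) pair. In the real method both
    are learned; here the uncertainty is randomly simulated for illustration.
    """
    rng = np.random.default_rng(0)
    color = np.tanh(x.sum(axis=1, keepdims=True) * scale)
    uncertainty = rng.uniform(0.0, 1.0, size=(x.shape[0],))
    return color, uncertainty

def recursive_query(points, levels=3, threshold=0.5):
    """Uncertainty-gated evaluation across levels.

    Queries whose uncertainty is at most `threshold` keep the current
    level's prediction; the rest are forwarded to the next (larger)
    network. The last level settles all remaining queries.
    """
    out = np.zeros((points.shape[0], 1))
    active = np.arange(points.shape[0])  # indices of unresolved queries
    for level in range(levels):
        color, unc = tiny_net(points[active], scale=1.0 + level)
        last = level == levels - 1
        settled = unc <= threshold if not last else np.ones_like(unc, bool)
        out[active[settled]] = color[settled]   # commit confident predictions
        active = active[~settled]               # forward the uncertain rest
        if active.size == 0:
            break
    return out
```

Because only a shrinking subset of queries reaches the larger networks, the expected per-query cost is lower than evaluating every query with the biggest network, which is the efficiency argument the abstract makes.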
DOI: 10.1109/TVCG.2022.3204608
ISSN: 1077-2626
EISSN: 1941-0506
Source: IEEE Electronic Library (IEL)
Subjects: 3D deep learning; Complexity theory; Datasets; Image color analysis; Image quality; image-based rendering; Neural networks; Rendering (computer graphics); scene representation; Three-dimensional displays; Training; Uncertainty; view synthesis; volume rendering