Efficient NeRF Optimization -- Not All Samples Remain Equally Hard
We propose an application of online hard sample mining for efficient training of Neural Radiance Fields (NeRF). NeRF models produce state-of-the-art quality for many 3D reconstruction and rendering tasks but require substantial computational resources. The encoding of the scene information within the NeRF network parameters necessitates stochastic sampling. We observe that during the training, a major part of the compute time and memory usage is spent on processing already learnt samples, which no longer affect the model update significantly. We identify the backward pass on the stochastic samples as the computational bottleneck during the optimization. We thus perform the first forward pass in inference mode as a relatively low-cost search for hard samples. This is followed by building the computational graph and updating the NeRF network parameters using only the hard samples. To demonstrate the effectiveness of the proposed approach, we apply our method to Instant-NGP, resulting in significant improvements of the view-synthesis quality over the baseline (1 dB improvement on average per training time, or 2x speedup to reach the same PSNR level) along with approx. 40% memory savings coming from using only the hard samples to build the computational graph. As our method only interfaces with the network module, we expect it to be widely applicable.
Saved in:
Published in: | arXiv.org 2024-08 |
---|---|
Main Authors: | Korhonen, Juuso; Rangu, Goutham; Tavakoli, Hamed R; Kannala, Juho |
Format: | Article |
Language: | eng |
Keywords: | |
Online Access: | Full text |
container_title | arXiv.org |
---|---|
creator | Korhonen, Juuso; Rangu, Goutham; Tavakoli, Hamed R; Kannala, Juho |
description | We propose an application of online hard sample mining for efficient training of Neural Radiance Fields (NeRF). NeRF models produce state-of-the-art quality for many 3D reconstruction and rendering tasks but require substantial computational resources. The encoding of the scene information within the NeRF network parameters necessitates stochastic sampling. We observe that during the training, a major part of the compute time and memory usage is spent on processing already learnt samples, which no longer affect the model update significantly. We identify the backward pass on the stochastic samples as the computational bottleneck during the optimization. We thus perform the first forward pass in inference mode as a relatively low-cost search for hard samples. This is followed by building the computational graph and updating the NeRF network parameters using only the hard samples. To demonstrate the effectiveness of the proposed approach, we apply our method to Instant-NGP, resulting in significant improvements of the view-synthesis quality over the baseline (1 dB improvement on average per training time, or 2x speedup to reach the same PSNR level) along with approx. 40% memory savings coming from using only the hard samples to build the computational graph. As our method only interfaces with the network module, we expect it to be widely applicable. |
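The two-pass scheme in the description — a cheap gradient-free forward pass to mine hard samples, followed by a gradient-carrying pass restricted to those samples — can be sketched in plain Python. This is an illustrative toy (a scalar model y ≈ w·x fitted by squared error), not the paper's Instant-NGP implementation; the function name `hard_sample_step` and the fixed `threshold` are assumptions made for the sketch.

```python
def hard_sample_step(w, xs, ys, threshold=0.01, lr=0.1):
    """One optimization step with online hard sample mining.

    Toy stand-in for the paper's scheme: fit y ~ w * x by squared error.
    """
    # Pass 1 ("inference mode"): compute per-sample losses only, with no
    # gradient bookkeeping -- a relatively low-cost search for hard samples.
    errors = [(w * x - y) ** 2 for x, y in zip(xs, ys)]
    hard = [i for i, e in enumerate(errors) if e > threshold]

    # Pass 2: rebuild the forward computation and its gradient
    # d(error)/dw = 2 * (w*x - y) * x, but only for the hard samples;
    # already-learnt samples are skipped entirely.
    if hard:
        grad = sum(2 * (w * xs[i] - ys[i]) * xs[i] for i in hard) / len(hard)
        w = w - lr * grad
    return w, hard
```

As training progresses the hard set shrinks, so later steps touch ever fewer samples. In a real NeRF trainer the first pass would run with autograd disabled (e.g. under `torch.no_grad()`), and only the mined samples would be re-run to build the computational graph for the parameter update.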
format | Article |
fullrecord | ProQuest record cdi_proquest_journals_3090049823. Document published 2024-08-06 by Cornell University Library, arXiv.org (Ithaca). EISSN: 2331-8422. Open access (free_for_read) under http://arxiv.org/licenses/nonexclusive-distrib/1.0/. |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-08 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3090049823 |
source | Free E-Journals |
subjects | Graphical user interface; Memory tasks; Optimization; Parameter identification |
title | Efficient NeRF Optimization -- Not All Samples Remain Equally Hard |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-23T21%3A02%3A42IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Efficient%20NeRF%20Optimization%20--%20Not%20All%20Samples%20Remain%20Equally%20Hard&rft.jtitle=arXiv.org&rft.au=Korhonen,%20Juuso&rft.date=2024-08-06&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3090049823%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3090049823&rft_id=info:pmid/&rfr_iscdi=true |