NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds


Detailed Description

Bibliographic Details
Main authors: Yang, Chen; Li, Peihao; Zhou, Zanwei; Yuan, Shanxin; Liu, Bingbing; Yang, Xiaokang; Qiu, Weichao; Shen, Wei
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Yang, Chen
Li, Peihao
Zhou, Zanwei
Yuan, Shanxin
Liu, Bingbing
Yang, Xiaokang
Qiu, Weichao
Shen, Wei
description We present NeRFVS, a novel neural radiance fields (NeRF) based method to enable free navigation in a room. NeRF achieves impressive performance in rendering images for novel views similar to the input views while suffering for novel views that are significantly different from the training views. To address this issue, we utilize the holistic priors, including pseudo depth maps and view coverage information, from neural reconstruction to guide the learning of implicit neural representations of 3D indoor scenes. Concretely, an off-the-shelf neural reconstruction method is leveraged to generate a geometry scaffold. Then, two loss functions based on the holistic priors are proposed to improve the learning of NeRF: 1) A robust depth loss that can tolerate the error of the pseudo depth map to guide the geometry learning of NeRF; 2) A variance loss to regularize the variance of implicit neural representations to reduce the geometry and color ambiguity in the learning procedure. These two loss functions are modulated during NeRF optimization according to the view coverage information to reduce the negative influence brought by the view coverage imbalance. Extensive results demonstrate that our NeRFVS outperforms state-of-the-art view synthesis methods quantitatively and qualitatively on indoor scenes, achieving high-fidelity free navigation results.
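The description names two priors-based losses: a robust depth loss that tolerates errors in the pseudo depth map, and a variance loss on the ray's weight distribution, both modulated by view coverage. The record does not give their exact formulations, so the sketch below uses plausible stand-ins: a Huber-style penalty for the depth term, the weighted variance of sample depths along each ray for the variance term, and a hypothetical linear coverage schedule. All function names and the `delta`/`lam` parameters are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def robust_depth_loss(rendered_depth, pseudo_depth, delta=0.1):
    """Huber-style loss (assumed form): quadratic for small errors, linear for
    large ones, so gross errors in the pseudo depth map do not dominate."""
    err = np.abs(rendered_depth - pseudo_depth)
    quadratic = 0.5 * err**2 / delta
    linear = err - 0.5 * delta
    return np.where(err <= delta, quadratic, linear)

def variance_loss(weights, t_vals, rendered_depth):
    """Variance of sample depths along each ray under the NeRF weight
    distribution; penalizing it concentrates density near a single surface."""
    return np.sum(weights * (t_vals - rendered_depth[..., None])**2, axis=-1)

def modulated_loss(depth_l, var_l, coverage, lam=0.1):
    """Hypothetical schedule: trust the priors more where view coverage
    (in [0, 1]) is sparse, less where the view is already well observed."""
    w = 1.0 - coverage
    return w * (depth_l + lam * var_l)
```

For example, a ray with weights `[0.5, 0.5]` at depths `[1, 3]` renders depth 2 but has depth variance 1; the variance term pushes such ambiguous density toward a single peak, while the coverage weight keeps both priors from overriding the photometric loss in well-covered regions.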
doi_str_mv 10.48550/arxiv.2304.06287
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2304.06287
language eng
recordid cdi_arxiv_primary_2304_06287
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computer Vision and Pattern Recognition
title NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds