Global-guided Focal Neural Radiance Field for Large-scale Scene Rendering

Neural radiance fields (NeRF) have recently been applied to render large-scale scenes. However, their limited model capacity typically results in blurred rendering results. Existing large-scale NeRFs primarily address this limitation by partitioning the scene into blocks, which are subsequently handled by separate sub-NeRFs...

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Shao, Mingqi, Xiong, Feng, Zhang, Hang, Yang, Shuang, Xu, Mu, Bian, Wei, Wang, Xueqian
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Shao, Mingqi; Xiong, Feng; Zhang, Hang; Yang, Shuang; Xu, Mu; Bian, Wei; Wang, Xueqian
description Neural radiance fields (NeRF) have recently been applied to render large-scale scenes. However, their limited model capacity typically results in blurred rendering results. Existing large-scale NeRFs primarily address this limitation by partitioning the scene into blocks, which are subsequently handled by separate sub-NeRFs. These sub-NeRFs, trained from scratch and processed independently, lead to inconsistencies in geometry and appearance across the scene. Consequently, the rendering quality fails to exhibit significant improvement despite the expansion of model capacity. In this work, we present global-guided focal neural radiance field (GF-NeRF) that achieves high-fidelity rendering of large-scale scenes. Our proposed GF-NeRF utilizes a two-stage (Global and Focal) architecture and a global-guided training strategy. The global stage obtains a continuous representation of the entire scene while the focal stage decomposes the scene into multiple blocks and further processes them with distinct sub-encoders. Leveraging this two-stage architecture, sub-encoders only need fine-tuning based on the global encoder, thus reducing training complexity in the focal stage while maintaining scene-wide consistency. Spatial information and error information from the global stage also help the sub-encoders focus on crucial areas and effectively capture more details of large-scale scenes. Notably, our approach does not rely on any prior knowledge about the target scene, making GF-NeRF adaptable to various large-scale scene types, including street-view and aerial-view scenes. We demonstrate that our method achieves high-fidelity, natural rendering results on various types of large-scale datasets. Our project page: https://shaomq2187.github.io/GF-NeRF/
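The abstract's central idea — a global encoder trained on the whole scene, with per-block sub-encoders initialized from it and fine-tuned rather than trained from scratch — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual code: the encoder, block partitioning, and all names and shapes here are hypothetical stand-ins (the real method uses NeRF encoders and learned sampling).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(dim=8):
    # Toy stand-in "encoder": a single linear map from 3D points to features.
    return {"W": rng.normal(size=(dim, 3))}

def assign_block(points, n_blocks=4):
    # Partition the scene along x into equal-width blocks (illustrative only).
    x = points[:, 0]
    edges = np.linspace(x.min(), x.max() + 1e-9, n_blocks + 1)
    return np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_blocks - 1)

# Stage 1 (Global): one encoder covers the entire scene.
global_encoder = make_encoder()

# Stage 2 (Focal): each block's sub-encoder starts from the global weights,
# so it is fine-tuned rather than trained from scratch -- this is what
# preserves scene-wide consistency across blocks.
points = rng.uniform(-1, 1, size=(1000, 3))
blocks = assign_block(points)
sub_encoders = {b: {"W": global_encoder["W"].copy()} for b in range(4)}

# Each sub-encoder would then be optimized only on its block's points
# (the training loop is omitted); here we just show the routing.
for b, enc in sub_encoders.items():
    pts = points[blocks == b]
    feats = pts @ enc["W"].T  # per-block features, shape (n_points_in_block, 8)
```

The key design point the sketch illustrates is the initialization: because every sub-encoder begins as a copy of the shared global encoder, adjacent blocks start from an identical representation instead of independently random ones.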
doi_str_mv 10.48550/arxiv.2403.12839
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2403.12839
language eng
recordid cdi_arxiv_primary_2403_12839
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Global-guided Focal Neural Radiance Field for Large-scale Scene Rendering