UPST-NeRF: Universal Photorealistic Style Transfer of Neural Radiance Fields for 3D Scene
Photorealistic stylization of 3D scenes aims to generate photorealistic images from arbitrary novel views according to a given style image, while ensuring consistency when rendering from different viewpoints. Some existing stylization methods based on neural radiance fields can effectively predict stylized...
Saved in:
Published in: | arXiv.org 2022-08 |
---|---|
Main authors: | Chen, Yaosen ; Yuan, Qi ; Li, Zhiqiang ; Liu, Yuegen ; Wang, Wei ; Xie, Chaoping ; Wen, Xuming ; Yu, Qien |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Chen, Yaosen ; Yuan, Qi ; Li, Zhiqiang ; Liu, Yuegen ; Wang, Wei ; Xie, Chaoping ; Wen, Xuming ; Yu, Qien |
description | Photorealistic stylization of 3D scenes aims to generate photorealistic images from arbitrary novel views according to a given style image, while ensuring consistency when rendering from different viewpoints. Some existing stylization methods based on neural radiance fields can effectively predict stylized scenes by combining the features of the style image with multi-view images to train 3D scenes. However, these methods generate novel-view images that contain objectionable artifacts. Moreover, they cannot achieve universal photorealistic stylization of a 3D scene: each new style image requires retraining the 3D scene representation network based on a neural radiance field. We propose a novel 3D scene photorealistic style transfer framework to address these issues. It realizes photorealistic 3D scene style transfer from a 2D style image. We first pre-train a 2D photorealistic style transfer network, which can perform photorealistic style transfer between any given content image and style image. Then, we use voxel features to optimize a 3D scene and obtain the geometric representation of the scene. Finally, we jointly optimize a hypernetwork to realize photorealistic style transfer of the scene for arbitrary style images. In the transfer stage, we use the pre-trained 2D photorealistic network to constrain the photorealistic style across different views and different style images in the 3D scene. The experimental results show that our method not only realizes 3D photorealistic style transfer for arbitrary style images but also outperforms existing methods in terms of visual quality and consistency. Project page: https://semchan.github.io/UPST_NeRF. |
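The abstract's central idea is a hypernetwork that conditions the scene's appearance on a style embedding, so that one trained model handles arbitrary style images without retraining. The sketch below illustrates that idea only: a small hypernetwork generates the weights of a color MLP that maps a per-voxel appearance feature to a stylized RGB value. All names, dimensions, and architecture choices here are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 12   # per-voxel appearance feature size (assumed)
STYLE_DIM = 8   # style embedding size (assumed)
HIDDEN = 16     # hidden width of the style-conditioned color MLP (assumed)

# Hypernetwork: a single linear map from the style embedding to the
# flattened weights of the two-layer color MLP.
W_hyper = rng.normal(0, 0.1, size=(STYLE_DIM, FEAT_DIM * HIDDEN + HIDDEN * 3))

def color_mlp_weights(style_emb):
    """Generate the color-MLP weights from a style embedding."""
    flat = style_emb @ W_hyper
    w1 = flat[: FEAT_DIM * HIDDEN].reshape(FEAT_DIM, HIDDEN)
    w2 = flat[FEAT_DIM * HIDDEN:].reshape(HIDDEN, 3)
    return w1, w2

def stylized_rgb(voxel_feat, style_emb):
    """Map a voxel appearance feature to RGB with style-conditioned weights."""
    w1, w2 = color_mlp_weights(style_emb)
    h = np.maximum(voxel_feat @ w1, 0.0)        # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ w2)))      # sigmoid keeps RGB in [0, 1]

# The same geometry (voxel feature) rendered under two style embeddings
# yields two different colors, while density/geometry stays untouched.
feat = rng.normal(size=FEAT_DIM)
rgb_a = stylized_rgb(feat, rng.normal(size=STYLE_DIM))
rgb_b = stylized_rgb(feat, rng.normal(size=STYLE_DIM))
```

The design choice this mirrors is that only appearance is conditioned on style; the voxel geometry is optimized once per scene, which is what makes the stylization "universal" across style images.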
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-08 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2702669319 |
source | Free E-Journals |
subjects | Consistency ; Radiance ; Representations ; Styling |
title | UPST-NeRF: Universal Photorealistic Style Transfer of Neural Radiance Fields for 3D Scene |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T05%3A42%3A20IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=UPST-NeRF:%20Universal%20Photorealistic%20Style%20Transfer%20of%20Neural%20Radiance%20Fields%20for%203D%20Scene&rft.jtitle=arXiv.org&rft.au=Chen,%20Yaosen&rft.date=2022-08-21&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2702669319%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2702669319&rft_id=info:pmid/&rfr_iscdi=true |