Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation

We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis. While NeRF-based approaches are effective for novel view synthesis, such models memorize the radiance for every point in a scene within a neural network. Since these models are scene-specific and lack a 3D scene representation, classical editing such as shape manipulation, or combining scenes, is not possible. Hence, editing and combining NeRF-based scenes has not been demonstrated. With the aim of obtaining interpretable and controllable scene representations, our model couples learnt scene-specific feature volumes with a scene-agnostic neural rendering network. With this hybrid representation, we decouple neural rendering from scene-specific geometry and appearance. We can generalize to novel scenes by optimizing only the scene-specific 3D feature representation, while keeping the parameters of the rendering network fixed. The rendering function learnt during the initial training stage can thus be easily applied to new scenes, making our approach more flexible. More importantly, since the feature volumes are independent of the rendering model, we can manipulate and combine scenes by editing their corresponding feature volumes. The edited volume can then be plugged into the rendering model to synthesize high-quality novel views. We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results.

Saved in:
Bibliographic Details
Main Authors: Lazova, Verica; Guzov, Vladimir; Olszewski, Kyle; Tulyakov, Sergey; Pons-Moll, Gerard
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online Access: Order full text
creator Lazova, Verica; Guzov, Vladimir; Olszewski, Kyle; Tulyakov, Sergey; Pons-Moll, Gerard
description We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis. While NeRF-based approaches are effective for novel view synthesis, such models memorize the radiance for every point in a scene within a neural network. Since these models are scene-specific and lack a 3D scene representation, classical editing such as shape manipulation, or combining scenes is not possible. Hence, editing and combining NeRF-based scenes has not been demonstrated. With the aim of obtaining interpretable and controllable scene representations, our model couples learnt scene-specific feature volumes with a scene agnostic neural rendering network. With this hybrid representation, we decouple neural rendering from scene-specific geometry and appearance. We can generalize to novel scenes by optimizing only the scene-specific 3D feature representation, while keeping the parameters of the rendering network fixed. The rendering function learnt during the initial training stage can thus be easily applied to new scenes, making our approach more flexible. More importantly, since the feature volumes are independent of the rendering model, we can manipulate and combine scenes by editing their corresponding feature volumes. The edited volume can then be plugged into the rendering model to synthesize high-quality novel views. We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results.
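The core mechanism in the description — a scene stored as a learnable 3D feature volume, queried per point and decoupled from a shared, scene-agnostic renderer — can be illustrated with a minimal sketch. This is not the authors' code: the function names (`sample_volume`, `paste_region`), the dense NumPy grid, and the absence of the actual rendering MLP are all simplifying assumptions for illustration.

```python
import numpy as np

def sample_volume(volume, pts):
    """Trilinearly interpolate per-point features from a scene's feature grid.

    volume: (D, H, W, C) learnable feature grid for one scene (hypothetical layout).
    pts:    (N, 3) query points in voxel coordinates (z, y, x).
    returns (N, C) features; in Control-NeRF-style models these would be fed
    to a shared rendering network to predict density and color.
    """
    D, H, W, C = volume.shape
    lo = np.floor(pts).astype(int)
    lo = np.clip(lo, 0, [D - 2, H - 2, W - 2])  # keep lo and lo+1 in bounds
    frac = pts - lo                             # (N, 3) interpolation weights
    out = np.zeros((pts.shape[0], C))
    for dz in (0, 1):                           # blend the 8 surrounding voxels
        for dy in (0, 1):
            for dx in (0, 1):
                corner = volume[lo[:, 0] + dz, lo[:, 1] + dy, lo[:, 2] + dx]
                w = (np.where(dz, frac[:, 0], 1 - frac[:, 0])
                     * np.where(dy, frac[:, 1], 1 - frac[:, 1])
                     * np.where(dx, frac[:, 2], 1 - frac[:, 2]))
                out += w[:, None] * corner
    return out

def paste_region(dst, src, dst_corner, src_corner, size):
    """Because rendering is independent of the volume, 'inserting an object'
    can be as simple as copying a box of features between scene volumes."""
    dz, dy, dx = dst_corner
    sz, sy, sx = src_corner
    d, h, w = size
    dst[dz:dz + d, dy:dy + h, dx:dx + w] = src[sz:sz + d, sy:sy + h, sx:sx + w]
```

Because the renderer only ever sees interpolated features, any edit to the volume (deforming the sampling grid, mixing two volumes, pasting regions) composes with the fixed rendering network for free, which is the decoupling the abstract emphasizes.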
doi_str_mv 10.48550/arxiv.2204.10850
format Article
creationdate 2022-04-22
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0 (free to read)
backlink https://arxiv.org/abs/2204.10850
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2204.10850
language eng
recordid cdi_arxiv_primary_2204_10850
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation