MultiDiff: Consistent Novel View Synthesis from a Single Image

We introduce MultiDiff, a novel approach for consistent novel view synthesis of scenes from a single RGB image. The task of synthesizing novel views from a single reference image is highly ill-posed by nature, as there exist multiple plausible explanations for unobserved areas. To address this issue, we incorporate strong priors in the form of monocular depth predictors and video-diffusion models. Monocular depth enables us to condition our model on warped reference images for the target views, increasing geometric stability. The video-diffusion prior provides a strong proxy for 3D scenes, allowing the model to learn continuous and pixel-accurate correspondences across generated images. In contrast to approaches relying on autoregressive image generation that are prone to drift and error accumulation, MultiDiff jointly synthesizes a sequence of frames, yielding high-quality and multi-view consistent results -- even for long-term scene generation with large camera movements, while reducing inference time by an order of magnitude. For additional consistency and image quality improvements, we introduce a novel, structured noise distribution. Our experimental results demonstrate that MultiDiff outperforms state-of-the-art methods on the challenging, real-world datasets RealEstate10K and ScanNet. Finally, our model naturally supports multi-view consistent editing without the need for further tuning.
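
The central conditioning idea described in the abstract, warping the reference image into each target view using predicted monocular depth, can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration under simple pinhole-camera assumptions: the function name, the nearest-neighbor splatting, and the NumPy implementation are ours rather than the paper's, and in MultiDiff such warps serve only as a conditioning signal for the diffusion model.

import numpy as np

def warp_reference_to_target(image, depth, K, R, t):
    """Forward-warp a reference view into a target view.

    image : (H, W, 3) reference RGB image
    depth : (H, W)    per-pixel depth for the reference view, e.g. from a
                      monocular depth predictor
    K     : (3, 3)    shared pinhole intrinsics
    R, t  : (3, 3) rotation and (3,) translation from reference to target camera

    Returns the warped image and a boolean mask of covered target pixels.
    """
    H, W = depth.shape

    # Homogeneous pixel coordinates of the reference image.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)

    # Unproject with the depth map, move into the target frame, and project.
    pts_ref = (pix @ np.linalg.inv(K).T) * depth.reshape(-1, 1)
    pts_tgt = pts_ref @ R.T + t
    proj = pts_tgt @ K.T
    z = proj[:, 2]

    # Keep points in front of the target camera that land inside the image.
    uv = np.full((pix.shape[0], 2), -1, dtype=np.int64)
    in_front = z > 1e-6
    uv[in_front] = np.round(proj[in_front, :2] / z[in_front, None]).astype(np.int64)
    keep = in_front & (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    uv, z = uv[keep], z[keep]
    colors = image.reshape(-1, 3)[keep]

    # Resolve occlusions: for every target pixel keep only the nearest 3D point.
    flat = uv[:, 1] * W + uv[:, 0]
    order = np.lexsort((z, flat))            # sort by pixel index, then by depth
    flat, uv, colors = flat[order], uv[order], colors[order]
    nearest = np.ones(len(flat), dtype=bool)
    nearest[1:] = flat[1:] != flat[:-1]      # first hit per pixel is the closest one

    warped = np.zeros_like(image)
    mask = np.zeros((H, W), dtype=bool)
    warped[uv[nearest, 1], uv[nearest, 0]] = colors[nearest]
    mask[uv[nearest, 1], uv[nearest, 0]] = True
    return warped, mask

Pixels left empty in the returned mask correspond to the unobserved regions mentioned in the abstract; these are the areas the diffusion model has to fill in, while the warp itself only stabilizes the geometry of what was already visible in the reference image.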

Bibliographic Details

Published in: arXiv.org, 2024-06
Main authors: Müller, Norman; Schwarz, Katja; Roessle, Barbara; Porzi, Lorenzo; Samuel Rota Bulò; Nießner, Matthias; Kontschieder, Peter
Format: Article
Language: English
Subjects: Image contrast; Image processing; Image quality; Scene generation; Synthesis
Online access: Full text
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)