DimensionX: Create Any 3D and 4D Scenes from a Single Image with Controllable Video Diffusion

In this paper, we introduce DimensionX, a framework designed to generate photorealistic 3D and 4D scenes from just a single image with video diffusion. Our approach begins with the insight that both the spatial structure of a 3D scene and the temporal evolution of a 4D scene can be effectively represented through sequences of video frames. While recent video diffusion models have shown remarkable success in producing vivid visuals, they face limitations in directly recovering 3D/4D scenes due to limited spatial and temporal controllability during generation. To overcome this, we propose ST-Director, which decouples spatial and temporal factors in video diffusion by learning dimension-aware LoRAs from dimension-variant data. This controllable video diffusion approach enables precise manipulation of spatial structure and temporal dynamics, allowing us to reconstruct both 3D and 4D representations from sequential frames by combining the spatial and temporal dimensions. Additionally, to bridge the gap between generated videos and real-world scenes, we introduce a trajectory-aware mechanism for 3D generation and an identity-preserving denoising strategy for 4D generation. Extensive experiments on various real-world and synthetic datasets demonstrate that DimensionX achieves superior results in controllable video generation, as well as in 3D and 4D scene generation, compared with previous methods.
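The core mechanism the abstract describes, steering a frozen video diffusion backbone with separate low-rank adapters for spatial (camera) and temporal (motion) variation, can be illustrated with a minimal sketch. The layer below wraps a generic linear projection with two independently gated LoRA branches; the class name, rank, and gate interface are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class DimensionAwareLoRA(nn.Module):
    """Hypothetical sketch: a frozen linear projection steered by two
    independent low-rank adapters, one for spatial (camera) variation
    and one for temporal (motion) variation."""

    def __init__(self, base: nn.Linear, rank: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the diffusion backbone stays frozen
        d_in, d_out = base.in_features, base.out_features
        # One (down, up) low-rank pair per controllable dimension.
        self.spatial_down = nn.Linear(d_in, rank, bias=False)
        self.spatial_up = nn.Linear(rank, d_out, bias=False)
        self.temporal_down = nn.Linear(d_in, rank, bias=False)
        self.temporal_up = nn.Linear(rank, d_out, bias=False)
        # Zero-init the up-projections so each adapter starts as a no-op.
        nn.init.zeros_(self.spatial_up.weight)
        nn.init.zeros_(self.temporal_up.weight)

    def forward(self, x, s_gate: float = 1.0, t_gate: float = 0.0):
        # Gates select which "director" is active: (1, 0) for a
        # spatial-only clip (camera moves, scene frozen), (0, 1) for a
        # temporal-only clip (camera fixed, scene moves), or a blend
        # of both for 4D generation.
        out = self.base(x)
        out = out + s_gate * self.spatial_up(self.spatial_down(x))
        out = out + t_gate * self.temporal_up(self.temporal_down(x))
        return out

# Usage sketch: wrap a projection layer, then train only the spatial
# adapter on camera-orbit clips and only the temporal adapter on
# fixed-camera clips (the "dimension-variant data" of the abstract).
layer = DimensionAwareLoRA(nn.Linear(320, 320), rank=16)
x = torch.randn(2, 77, 320)
spatial_only = layer(x, s_gate=1.0, t_gate=0.0)
```

Under this reading, an S-Director and a T-Director are just the two gate settings over the same frozen backbone, which is what lets a single model emit the spatially varying frames needed for 3D reconstruction and the jointly varying frames needed for 4D scenes.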


Bibliographic Details
Published in: arXiv.org, 2024-11
Authors: Sun, Wenqiang; Chen, Shuo; Liu, Fangfu; Chen, Zilong; Duan, Yueqi; Zhang, Jun; Wang, Yikai
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Controllability; Frames (data processing); Scene generation; Spatiotemporal data; Synthetic data
Online access: Full text
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-03T15%3A59%3A33IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=DimensionX:%20Create%20Any%203D%20and%204D%20Scenes%20from%20a%20Single%20Image%20with%20Controllable%20Video%20Diffusion&rft.jtitle=arXiv.org&rft.au=Sun,%20Wenqiang&rft.date=2024-11-07&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3126151903%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3126151903&rft_id=info:pmid/&rfr_iscdi=true