One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization

Single image 3D reconstruction is an important but challenging task that requires extensive knowledge of our natural world. Many existing methods solve this problem by optimizing a neural radiance field under the guidance of 2D diffusion models, but they suffer from lengthy optimization times, 3D-inconsistent results, and poor geometry. In this work, we propose a novel method that takes a single image of any object as input and generates a full 360-degree 3D textured mesh in a single feed-forward pass. Given a single image, we first use a view-conditioned 2D diffusion model, Zero123, to generate multi-view images for the input view, and then aim to lift them up to 3D space. Since traditional reconstruction methods struggle with inconsistent multi-view predictions, we build our 3D reconstruction module upon an SDF-based generalizable neural surface reconstruction method and propose several critical training strategies to enable the reconstruction of 360-degree meshes. Without costly optimizations, our method reconstructs 3D shapes in significantly less time than existing methods. Moreover, our method favors better geometry, generates more 3D-consistent results, and adheres more closely to the input image. We evaluate our approach on both synthetic data and in-the-wild images and demonstrate its superiority in terms of both mesh quality and runtime. In addition, our approach can seamlessly support the text-to-3D task by integrating with off-the-shelf text-to-image diffusion models.
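To make the described two-stage pipeline concrete, here is a minimal structural sketch in Python. It is not the authors' code: every function name below (zero123_multiview, sdf_reconstruction, image_to_mesh) is a hypothetical placeholder, and the learned components are stubbed with dummy outputs so the sketch runs end to end. Only the data flow follows the abstract: one input view, Zero123-style multi-view prediction, then a single feed-forward SDF-based reconstruction with no per-shape optimization loop.

import numpy as np

def zero123_multiview(input_view, relative_poses):
    # Stand-in for the view-conditioned 2D diffusion model (Zero123):
    # one sampling pass per requested relative camera pose. Here we
    # simply copy the input so the control flow is executable.
    return [input_view.copy() for _ in relative_poses]

def sdf_reconstruction(views, poses):
    # Stand-in for the SDF-based generalizable neural surface
    # reconstruction module: it consumes the (possibly inconsistent)
    # predicted views in a single feed-forward pass. Returns an empty
    # dummy mesh in place of the real 360-degree textured output.
    vertices = np.zeros((0, 3), dtype=np.float32)
    faces = np.zeros((0, 3), dtype=np.int64)
    return vertices, faces

def image_to_mesh(input_view):
    # Sample relative camera poses around the object (elevation,
    # azimuth), predict the corresponding views, then lift them to 3D.
    azimuths = np.linspace(0.0, 360.0, 8, endpoint=False)
    poses = [(30.0, float(az)) for az in azimuths]
    views = zero123_multiview(input_view, poses)
    return sdf_reconstruction(views, poses)

if __name__ == "__main__":
    verts, faces = image_to_mesh(np.zeros((256, 256, 3), dtype=np.float32))
    print("vertices:", verts.shape, "faces:", faces.shape)

The key design point the sketch mirrors is that both stages are feed-forward: the runtime advantage claimed in the abstract comes from replacing per-shape optimization with a generalizable reconstruction network.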


Bibliographic Details
Published in: arXiv.org, 2023-06
Main authors: Liu, Minghua; Xu, Chao; Jin, Haian; Chen, Linghao; Mukund Varma T; Xu, Zexiang; Su, Hao
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Finite element method; Image reconstruction; Shape optimization; Synthetic data; Two dimensional models
Online access: Full text