SketchBodyNet: A Sketch-Driven Multi-faceted Decoder Network for 3D Human Reconstruction
Reconstructing 3D human shapes from 2D images has received increasing attention recently due to its fundamental support for many high-level 3D applications. Compared with natural images, freehand sketches are much more flexible to depict various shapes, providing a highly promising and valuable way for 3D human reconstruction.
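The full record description below notes that, during learning, existing 3D meshes are projected via the predicted camera parameters into 2D synthetic sketches with joints. As a minimal illustration, here is a weak-perspective projection of 3D joints, a common choice alongside SMPL; the `(s, tx, ty)` camera parameterization and the helper below are assumptions for illustration, not the paper's code:

```python
import numpy as np

def weak_perspective_project(joints3d, s, tx, ty):
    """Project 3D joints (N, 3) to 2D (N, 2): drop depth, scale, translate.

    Hypothetical helper; (s, tx, ty) is the usual weak-perspective camera,
    which the paper's predicted camera parameters may or may not match.
    """
    xy = joints3d[:, :2]            # discard the depth coordinate
    return s * xy + np.array([tx, ty])

joints = np.array([[0.0, 0.0, 0.1],
                   [0.5, -0.2, 0.3]])
proj = weak_perspective_project(joints, s=2.0, tx=0.1, ty=-0.1)
# proj has shape (2, 2): 2D joint positions ready to rasterize into a sketch
```

Per the description, such projected 2D joints are paired with synthetic sketch renderings of the mesh and combined with freehand sketches to optimize the model.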
Saved in:
Published in: | arXiv.org 2023-10 |
---|---|
Main authors: | Wang, Fei; Tang, Kongzhang; Wu, Hefeng; Zhao, Baoquan; Cai, Hao; Zhou, Teng |
Format: | Article |
Language: | eng |
Subjects: | Cameras; Decoders; Finite element method; Ill posed problems; Image reconstruction; Mathematical models; Multilayer perceptrons; Multilayers; Parameters; Sketches |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Wang, Fei; Tang, Kongzhang; Wu, Hefeng; Zhao, Baoquan; Cai, Hao; Zhou, Teng |
description | Reconstructing 3D human shapes from 2D images has received increasing attention recently due to its fundamental support for many high-level 3D applications. Compared with natural images, freehand sketches are much more flexible to depict various shapes, providing a highly promising and valuable way for 3D human reconstruction. However, such a task is highly challenging. The sparse, abstract character of sketches adds severe difficulties, such as arbitrariness, inaccuracy, and a lack of image detail, to the already severely ill-posed problem of 2D-to-3D reconstruction. Although current methods have achieved great success in reconstructing 3D human bodies from a single-view image, they do not work well on freehand sketches. In this paper, we propose a novel sketch-driven multi-faceted decoder network, termed SketchBodyNet, to address this task. Specifically, the network consists of a backbone and three separate attention decoder branches, where each decoder exploits a multi-head self-attention module to obtain enhanced features, followed by a multi-layer perceptron. The multi-faceted decoders predict the camera, shape, and pose parameters, respectively, which are then fed to the SMPL model to reconstruct the corresponding 3D human mesh. During training, existing 3D meshes are projected via the camera parameters into 2D synthetic sketches with joints, which are combined with the freehand sketches to optimize the model. To verify our method, we collect a large-scale dataset of about 26k freehand sketches and their corresponding 3D meshes, covering various human-body poses from 14 different viewing angles. Extensive experimental results demonstrate that our SketchBodyNet achieves superior performance in reconstructing 3D human meshes from freehand sketches. |
format | Article |
publisher | Ithaca: Cornell University Library, arXiv.org |
date | 2023-10-10 |
rights | 2023. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-10 |
issn | 2331-8422 |
language | eng |
source | Free E-Journals |
subjects | Cameras; Decoders; Finite element method; Ill posed problems; Image reconstruction; Mathematical models; Multilayer perceptrons; Multilayers; Parameters; Sketches |
title | SketchBodyNet: A Sketch-Driven Multi-faceted Decoder Network for 3D Human Reconstruction |
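The abstract above describes a backbone followed by three parallel decoder branches, each a multi-head self-attention module followed by a multi-layer perceptron, predicting camera, shape, and pose parameters for the SMPL model. The following NumPy sketch illustrates that decoding layout under stated assumptions: the feature size (256), token count (49), head count (4), and MLP width (128) are invented here, while the 3/10/72 output sizes are the standard weak-perspective camera, SMPL shape, and SMPL pose dimensions, which the paper may or may not use exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, n_heads=4):
    # x: (tokens, dim) -> (tokens, dim); random weights, shapes only
    _, d = x.shape
    dh = d // n_heads
    heads = []
    for _ in range(n_heads):
        Wq, Wk, Wv = (rng.standard_normal((d, dh)) * 0.02 for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        attn = softmax(q @ k.T / np.sqrt(dh), axis=-1)
        heads.append(attn @ v)
    return np.concatenate(heads, axis=-1)

def decoder_branch(x, out_dim):
    # one "facet": self-attention, mean-pool, then a two-layer MLP head
    h = multi_head_self_attention(x).mean(axis=0)
    W1 = rng.standard_normal((h.size, 128)) * 0.02
    W2 = rng.standard_normal((128, out_dim)) * 0.02
    return np.maximum(h @ W1, 0.0) @ W2

features = rng.standard_normal((49, 256))  # e.g. a flattened 7x7 backbone feature map
camera = decoder_branch(features, 3)   # weak-perspective camera (s, tx, ty)
shape  = decoder_branch(features, 10)  # SMPL shape coefficients (beta)
pose   = decoder_branch(features, 72)  # SMPL pose, 24 joints x 3 axis-angle (theta)
```

The three branches share the backbone features but learn separate weights, which matches the description's point that camera, shape, and pose are predicted by separate attention decoders before being fed to SMPL.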