ANIM: Accurate Neural Implicit Model for Human Reconstruction from a single RGB-D image

Recent progress in human shape learning shows that neural implicit models are effective in generating 3D human surfaces from a limited number of views, and even from a single RGB image. However, existing monocular approaches still struggle to recover fine geometric details such as faces, hands, or cloth wrinkles. They are also prone to depth ambiguities that result in distorted geometries along the camera's optical axis. In this paper, we explore the benefits of incorporating depth observations in the reconstruction process by introducing ANIM, a novel method that reconstructs arbitrary 3D human shapes from single-view RGB-D images with an unprecedented level of accuracy. Our model learns geometric details from both multi-resolution pixel-aligned and voxel-aligned features to leverage depth information and exploit spatial relationships, mitigating depth ambiguities. We further enhance the quality of the reconstructed shape with a depth-supervision strategy that improves the accuracy of the signed-distance-field estimation for points that lie on the reconstructed surface. Experiments demonstrate that ANIM outperforms state-of-the-art works that use RGB, surface normals, point clouds, or RGB-D data as input. In addition, we introduce ANIM-Real, a new multi-modal dataset comprising high-quality scans paired with consumer-grade RGB-D camera captures, together with our protocol for fine-tuning ANIM, enabling high-quality reconstruction from real-world human capture.

Bibliographic Details
Published in: arXiv.org 2024-03
Main authors: Pesavento, Marco, Xu, Yuanlu, Sarafianos, Nikolaos, Maier, Robert, Wang, Ziyan, Chun-Han, Yao, Volino, Marco, Boyer, Edmond, Hilton, Adrian, Tung, Tony
Format: Article
Language: eng
Subjects:
Online access: Full text
container_title arXiv.org
creator Pesavento, Marco
Xu, Yuanlu
Sarafianos, Nikolaos
Maier, Robert
Wang, Ziyan
Chun-Han, Yao
Volino, Marco
Boyer, Edmond
Hilton, Adrian
Tung, Tony
description Recent progress in human shape learning shows that neural implicit models are effective in generating 3D human surfaces from a limited number of views, and even from a single RGB image. However, existing monocular approaches still struggle to recover fine geometric details such as faces, hands, or cloth wrinkles. They are also prone to depth ambiguities that result in distorted geometries along the camera's optical axis. In this paper, we explore the benefits of incorporating depth observations in the reconstruction process by introducing ANIM, a novel method that reconstructs arbitrary 3D human shapes from single-view RGB-D images with an unprecedented level of accuracy. Our model learns geometric details from both multi-resolution pixel-aligned and voxel-aligned features to leverage depth information and exploit spatial relationships, mitigating depth ambiguities. We further enhance the quality of the reconstructed shape with a depth-supervision strategy that improves the accuracy of the signed-distance-field estimation for points that lie on the reconstructed surface. Experiments demonstrate that ANIM outperforms state-of-the-art works that use RGB, surface normals, point clouds, or RGB-D data as input. In addition, we introduce ANIM-Real, a new multi-modal dataset comprising high-quality scans paired with consumer-grade RGB-D camera captures, together with our protocol for fine-tuning ANIM, enabling high-quality reconstruction from real-world human capture.
format Article
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-03
issn 2331-8422
language eng
recordid cdi_proquest_journals_2962943235
source Free E-Journals
subjects Cameras
Geometric accuracy
Image reconstruction
Model accuracy
title ANIM: Accurate Neural Implicit Model for Human Reconstruction from a single RGB-D image