Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction

Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces. However, they can only produce static surfaces that are not controllable, which provides limited ability to modify the resulting model by editing its pose or shape parameters. Nevertheless, such features are essential in building flexible models for both computer graphics and computer vision. In this work, we present a methodology that combines detail-rich implicit functions and parametric representations in order to reconstruct 3D models of people that remain controllable and accurate even in the presence of clothing. Given sparse 3D point clouds sampled on the surface of a dressed person, we use an Implicit Part Network (IP-Net) to jointly predict the outer 3D surface of the dressed person, the inner body surface, and the semantic correspondences to a parametric body model. We subsequently use the correspondences to fit the body model to our inner surface and then non-rigidly deform it (under a parametric body + displacement model) to the outer surface in order to capture garment, face and hair detail. In quantitative and qualitative experiments with both full-body data and hand scans we show that the proposed methodology generalizes, and is effective even given incomplete point clouds collected from single-view depth images. Our models and code can be downloaded from http://virtualhumans.mpi-inf.mpg.de/ipnet.
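The abstract describes a three-stage pipeline: an implicit network predicts the outer (clothed) surface, the inner body surface, and part correspondences from a sparse point cloud; a parametric body model is then fitted to the inner surface; finally the fitted body is non-rigidly registered (body + displacements) to the outer surface. The sketch below illustrates that control flow only. All class and function names (ImplicitPartNetwork, fit_smpl_to_inner, register_smpl_plus_d), the number of parts, and the use of SMPL as the body model are placeholders and assumptions for illustration, not the authors' released API, which is available from http://virtualhumans.mpi-inf.mpg.de/ipnet.

```python
# Schematic sketch of the pipeline described in the abstract.
# Every class/function here is a dummy placeholder, not the released IP-Net code.
import numpy as np


class ImplicitPartNetwork:
    """Placeholder for the learned implicit network; returns dummy predictions."""

    def predict(self, points: np.ndarray):
        n = points.shape[0]
        outer_occupancy = np.random.rand(n)        # outer (clothed) surface prediction
        inner_occupancy = np.random.rand(n)        # inner (body) surface prediction
        part_labels = np.random.randint(0, 14, n)  # part correspondences (part count assumed)
        return outer_occupancy, inner_occupancy, part_labels


def fit_smpl_to_inner(inner_points, part_labels):
    """Placeholder: fit a parametric body model (e.g. SMPL) to the inner surface,
    guided by the predicted part correspondences."""
    pose, shape = np.zeros(72), np.zeros(10)       # SMPL-style pose/shape parameters
    return pose, shape


def register_smpl_plus_d(pose, shape, outer_points):
    """Placeholder: non-rigidly deform the fitted body (body model + per-vertex
    displacements) to the outer surface to capture garment, face and hair detail."""
    return np.zeros((6890, 3))                     # one displacement per SMPL vertex


# Pipeline: sparse point cloud -> implicit predictions -> controllable model.
scan = np.random.rand(5000, 3)                     # stand-in for a sparse input point cloud
net = ImplicitPartNetwork()
outer, inner, parts = net.predict(scan)
pose, shape = fit_smpl_to_inner(scan[inner > 0.5], parts[inner > 0.5])
displacements = register_smpl_plus_d(pose, shape, scan[outer > 0.5])
```

Because the final output is a body model plus displacements rather than a free-form mesh, the reconstruction stays controllable through its pose and shape parameters, which is the point the abstract emphasizes.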

Bibliographic Details
Authors: Bhatnagar, Bharat Lal; Sminchisescu, Cristian; Theobalt, Christian; Pons-Moll, Gerard
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online access: https://arxiv.org/abs/2007.11432
Source: arXiv.org
DOI: 10.48550/arXiv.2007.11432