REC-MV: REconstructing 3D Dynamic Cloth from Monocular Videos
Format: Article
Language: English
Abstract: Reconstructing dynamic 3D garment surfaces with open boundaries from monocular videos is an important problem, as it provides a practical and low-cost solution for clothes digitization. Recent neural rendering methods achieve high-quality dynamic clothed-human reconstruction from monocular video, but they cannot separate the garment surface from the body. Moreover, although existing garment reconstruction methods based on feature-curve representations demonstrate impressive results for single-image input, they struggle to generate temporally consistent surfaces for video input. To address these limitations, we formulate the task as an optimization problem of 3D garment feature curves and surface reconstruction from monocular video. We introduce a novel approach, called REC-MV, to jointly optimize the explicit feature curves and the implicit signed distance field (SDF) of the garments. The open garment meshes can then be extracted via garment template registration in the canonical space. Experiments on multiple casually captured datasets show that our approach outperforms existing methods and can produce high-quality dynamic garment surfaces. The source code is available at https://github.com/GAP-LAB-CUHK-SZ/REC-MV.
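The abstract's central representation is an implicit signed distance field. As a minimal illustration of the concept only (an analytic sphere SDF, not the paper's learned garment model), an SDF encodes a surface as the zero level set of a function that is negative inside and positive outside:

```python
import numpy as np

def sphere_sdf(points, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere: negative inside the surface,
    zero on it, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

pts = np.array([[0.0, 0.0, 0.0],   # center: inside
                [1.0, 0.0, 0.0],   # on the surface
                [2.0, 0.0, 0.0]])  # outside
print(sphere_sdf(pts))  # → [-1.  0.  1.]
```

In REC-MV the SDF is learned (optimized jointly with explicit feature curves) rather than analytic, but the sign convention and zero-level-set surface definition are the same.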
DOI: 10.48550/arxiv.2305.14236