COSMU: Complete 3D human shape from monocular unconstrained images
Saved in:

| Main authors: | , , |
| --- | --- |
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Summary: We present a novel framework to reconstruct complete 3D human shapes from a given target image by leveraging monocular unconstrained images. The objective of this work is to reproduce high-quality details in regions of the reconstructed human body that are not visible in the input target. The proposed methodology addresses the limitations of existing approaches for reconstructing 3D human shapes from a single image, which cannot reproduce shape details in occluded body regions. The missing information in the monocular input can be recovered from multiple views captured by multiple cameras; however, multi-view reconstruction methods require accurately calibrated and registered images, which are challenging to obtain in real-world scenarios. Given a target RGB image and a collection of uncalibrated, unregistered images of the same individual acquired with a single camera, we propose a novel framework to generate complete 3D human shapes. We introduce a novel module that generates 2D multi-view normal maps of the person registered with the target input image. The module consists of body part-based reference selection and body part-based registration. The generated 2D normal maps are then processed by a multi-view attention-based neural implicit model that estimates an implicit representation of the 3D shape, ensuring the reproduction of details in both observed and occluded regions. Extensive experiments demonstrate that the proposed approach estimates higher-quality details in the non-visible regions of 3D clothed human shapes than related methods, without using parametric models.
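The abstract only names the two stages of the normal-map module, so the following is a minimal sketch of one plausible reading, not the paper's procedure: for each body part, select the unconstrained reference image whose 2D keypoints best match the target after a least-squares similarity alignment, and reuse that alignment to register the part into the target frame. The keypoint format, part grouping, and Procrustes criterion are all illustrative assumptions.

```python
# Sketch (assumptions throughout) of body part-based reference selection and
# registration: choose, per part, the reference image with the lowest residual
# after a 2D similarity alignment of that part's keypoints to the target.
import numpy as np

def similarity_align(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping 2D points src -> dst (Umeyama); returns transform and residual."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = U @ np.diag([1.0, d]) @ Vt
    s = (S * np.array([1.0, d])).sum() / (src_c ** 2).sum()
    t = mu_d - s * (R @ mu_s)
    err = np.linalg.norm(s * src_c @ R.T - dst_c)
    return s, R, t, err

def select_and_register(target_kps, ref_kps_list, part_indices):
    """For one body part (a set of keypoint indices), pick the reference image
    with the lowest post-alignment residual against the target keypoints."""
    best = None
    for i, ref_kps in enumerate(ref_kps_list):
        s, R, t, err = similarity_align(ref_kps[part_indices],
                                        target_kps[part_indices])
        if best is None or err < best[-1]:
            best = (i, s, R, t, err)
    return best  # (reference index, scale, rotation, translation, residual)
```

The returned similarity transform could then warp the selected reference's normal map for that part into the target image frame, yielding the registered 2D multi-view normal maps the abstract describes.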
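Likewise, here is a minimal sketch of a multi-view attention-based neural implicit model in the spirit of the abstract: a shared 2D encoder extracts features from the registered normal maps, each query 3D point samples a feature from every view at its projection, multi-head attention fuses the per-view features, and an MLP decodes an occupancy value. All module names, dimensions, and the orthographic projection are assumptions, not the authors' architecture.

```python
# Sketch of a multi-view, attention-based neural implicit model: per-view
# features are sampled at each query point's projection, fused by attention
# across views, and decoded to occupancy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewImplicitModel(nn.Module):
    def __init__(self, feat_dim=64, num_heads=4):
        super().__init__()
        # Shared 2D encoder for the registered normal maps (nx, ny, nz channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Attention fuses the per-view features of each query point.
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # MLP decodes the fused feature plus the point's depth to occupancy.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, normal_maps, points):
        # normal_maps: (V, 3, H, W) registered multi-view normal maps.
        # points:      (N, 3) query points in normalized coords [-1, 1]^3.
        V = normal_maps.shape[0]
        feats = self.encoder(normal_maps)                          # (V, C, H, W)
        # Orthographic projection: sample each view at (x, y); per-view
        # extrinsics would be applied here in a full system.
        grid = points[None, None, :, :2].expand(V, 1, -1, -1)      # (V, 1, N, 2)
        sampled = F.grid_sample(feats, grid, align_corners=True)   # (V, C, 1, N)
        sampled = sampled.squeeze(2).permute(2, 0, 1)              # (N, V, C)
        fused, _ = self.attn(sampled, sampled, sampled)            # (N, V, C)
        fused = fused.mean(dim=1)                                  # (N, C)
        z = points[:, 2:3]                                         # depth feature
        occ = torch.sigmoid(self.mlp(torch.cat([fused, z], dim=1)))
        return occ.squeeze(1)                                      # (N,) in [0, 1]

# Example: 4 views of 256x256 normal maps, 1024 query points.
model = MultiViewImplicitModel()
occ = model(torch.randn(4, 3, 256, 256), torch.rand(1024, 3) * 2 - 1)
```

Because the attention weights are computed per query point, views that actually observe the corresponding body region can dominate the fused feature, which is one way such a model could reproduce detail in regions occluded in the target image.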
DOI: 10.48550/arxiv.2407.10586