Self-Supervised Collision Handling via Generative 3D Garment Models for Virtual Try-On
Saved in:
Format: Article
Language: English
Abstract: We propose a new generative model for 3D garment deformations that enables us
to learn, for the first time, a data-driven method for virtual try-on that
effectively addresses garment-body collisions. In contrast to existing methods
that require an undesirable postprocessing step to fix garment-body
interpenetrations at test time, our approach directly outputs 3D garment
configurations that do not collide with the underlying body. Key to our success
is a new canonical space for garments that removes pose-and-shape deformations
already captured by a new diffused human body model, which extrapolates body
surface properties such as skinning weights and blendshapes to any 3D point. We
leverage this representation to train a generative model with a novel
self-supervised collision term that learns to reliably solve garment-body
interpenetrations. We extensively evaluate and compare our results with
recently proposed data-driven methods, and show that our method is the first to
successfully address garment-body contact in unseen body shapes and motions,
without compromising realism and detail.
DOI: 10.48550/arxiv.2105.06462
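The self-supervised collision term mentioned in the abstract can be illustrated with a minimal sketch: penalize garment vertices whose signed distance to the body surface drops below a small safety margin. The function names, the hinge-squared form of the penalty, and the toy sphere "body" below are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def collision_loss(garment_verts, body_sdf, eps=2e-3):
    # body_sdf returns a signed distance per vertex: negative inside the body.
    d = body_sdf(garment_verts)
    # Hinge-squared penalty: zero once a vertex is at least eps outside the body,
    # growing quadratically with penetration depth otherwise.
    return float(np.sum(np.maximum(eps - d, 0.0) ** 2))

def sphere_sdf(points, radius=1.0):
    # Toy stand-in for a body model: signed distance to a unit sphere at the origin.
    return np.linalg.norm(points, axis=-1) - radius

inside = np.array([[0.5, 0.0, 0.0]])   # penetrates the toy body -> positive loss
outside = np.array([[2.0, 0.0, 0.0]])  # well outside -> zero loss
print(collision_loss(inside, sphere_sdf) > 0.0)    # True
print(collision_loss(outside, sphere_sdf) == 0.0)  # True
```

Because such a penalty is differentiable almost everywhere, it can be minimized during training rather than enforced by a postprocessing step at test time, which is the distinction the abstract draws against prior methods.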