Dance In the Wild: Monocular Human Animation with Neural Dynamic Appearance Synthesis
Saved in:
Main authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Synthesizing dynamic appearances of humans in motion plays a central role in
applications such as AR/VR and video editing. While many recent methods have
been proposed to tackle this problem, handling loose garments with complex
textures and highly dynamic motion remains challenging. In this paper, we
propose a video-based appearance synthesis method that tackles such challenges
and demonstrates high-quality results for in-the-wild videos that have not been
shown before. Specifically, we adapt a StyleGAN-based architecture to the task
of person-specific video-based motion retargeting. We introduce a novel motion
signature that is used to modulate the generator weights to capture dynamic
appearance changes, as well as to regularize the single-frame pose
estimates to improve temporal coherency. We evaluate our method on a set of
challenging videos and show that our approach achieves state-of-the-art
performance both qualitatively and quantitatively. |
---|---|
DOI: | 10.48550/arxiv.2111.05916 |
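The abstract describes modulating the generator weights with a motion signature, in the spirit of StyleGAN2's weight modulation/demodulation. The following is a minimal sketch of that mechanism, not the authors' code: the `motion_signature` vector and the `modulate_weights` helper are hypothetical stand-ins for a learned motion encoder's per-channel scales.

```python
# Hedged sketch (assumption, not the paper's implementation): StyleGAN2-style
# weight modulation, here conditioned on a hypothetical "motion signature"
# vector rather than a style latent, as the abstract suggests.
import numpy as np

def modulate_weights(weights, motion_signature, eps=1e-8):
    """Scale per-input-channel conv weights by motion-derived factors,
    then demodulate so each output filter keeps roughly unit L2 norm.

    weights:          (out_ch, in_ch, k, k) convolution kernel
    motion_signature: (in_ch,) per-channel scales from a motion encoder
    """
    w = weights * motion_signature[None, :, None, None]        # modulate
    demod = 1.0 / np.sqrt((w ** 2).sum(axis=(1, 2, 3)) + eps)  # per-filter norm
    return w * demod[:, None, None, None]                      # demodulate

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3, 3, 3))
sig = rng.normal(size=3) ** 2 + 0.5  # stand-in for a learned motion signature
w_mod = modulate_weights(w, sig)
# After demodulation, every output filter has (approximately) unit L2 norm.
print(np.allclose((w_mod ** 2).sum(axis=(1, 2, 3)), 1.0))
```

In StyleGAN2 this modulated kernel replaces per-layer instance normalization; conditioning the scales on motion (rather than a static style) would let appearance vary with the dynamics of the input pose sequence.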