Physics-based Human Motion Estimation and Synthesis from Videos
Saved in:

| Main authors | , , , , , |
|---|---|
| Format | Article |
| Language | English |
| Subjects | |
| Online access | Order full text |
Summary: Human motion synthesis is an important problem with applications in graphics, gaming and simulation environments for robotics. Existing methods require accurate motion capture data for training, which is costly to obtain. Instead, we propose a framework for training generative models of physically plausible human motion directly from monocular RGB videos, which are much more widely available. At the core of our method is a novel optimization formulation that corrects imperfect image-based pose estimations by enforcing physics constraints and reasons about contacts in a differentiable way. This optimization yields corrected 3D poses and motions, as well as their corresponding contact forces. Results show that our physically-corrected motions significantly outperform prior work on pose estimation. We can then use these to train a generative model to synthesize future motion. We demonstrate both qualitatively and quantitatively improved motion estimation, synthesis quality and physical plausibility achieved by our method on the Human3.6M dataset~\cite{h36m_pami} as compared to prior kinematic and physics-based methods. By enabling learning of motion synthesis from video, our method paves the way for large-scale, realistic and diverse motion synthesis. Project page: \url{https://nv-tlabs.github.io/publication/iccv_2021_physics/}
DOI: 10.48550/arxiv.2109.09913