Video-Based Human Walking Estimation Using Joint Gait and Pose Manifolds
Saved in:
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2017-07, Vol. 27 (7), pp. 1540-1554
Main authors: , ,
Format: Article
Language: English
Keywords:
Online access: Order full text
Abstract: We study two fundamental issues about video-based human walking estimation, where the goal is to estimate 3D gait kinematics (i.e., joint positions) from 2D gait appearances (i.e., silhouettes). One is how to model the gait kinematics from different walking styles, and the other is how to represent the gait appearances captured under different views and from individuals of distinct walking styles and body shapes. Our research is conducted in three steps. First, we propose the idea of joint gait-pose manifold (JGPM), which represents gait kinematics by coupling two nonlinear variables, pose (a specific walking stage) and gait (a particular walking style), in a unified latent space. We extend the Gaussian process latent variable model (GPLVM) for JGPM learning, where two heuristic topological priors, a torus and a cylinder, are considered and several JGPMs of different degrees of freedom (DoFs) are introduced for comparative analysis. Second, we develop a validation technique and a series of benchmark tests to evaluate multiple JGPMs and recent GPLVMs in terms of their performance for gait motion modeling. It is shown that the toroidal prior is slightly better than the cylindrical one, and the JGPM of 4 DoFs that balances the toroidal prior with the intrinsic data structure achieves the best performance. Third, a JGPM-based visual gait generative model (JGPM-VGGM) is developed, where JGPM plays a central role to bridge the gap between the gait appearances and the gait kinematics. Our proposed JGPM-VGGM is learned from Carnegie Mellon University MoCap data and tested on the HumanEva-I and HumanEva-II data sets. Our experimental results demonstrate the effectiveness and competitiveness of our algorithms compared with existing algorithms.
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2016.2527218
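
The record carries only the abstract, but the toroidal-prior idea it describes can be illustrated. The sketch below is a hypothetical stand-in, not the authors' implementation: it embeds a periodic pose variable (walking stage) and a gait variable (walking style) on a torus, fits a plain GP regression as a proxy for the GPLVM's generative mapping from latent space to kinematics, and recovers latent coordinates for a noisy observation by brute-force grid search rather than the paper's inference procedure. All data, radii, and kernel settings are synthetic assumptions.

```python
# Minimal sketch (not the authors' code): a toroidal joint gait-pose latent
# space with a GP mapping to kinematics, plus a toy latent-inference step.
# All data, torus radii, and kernel parameters are illustrative assumptions.
import numpy as np

def torus_embed(pose, gait, R=2.0, r=1.0):
    """Map (pose, gait) angles to 3D torus points.
    pose: walking stage in [0, 2*pi), the periodic dimension of a gait cycle.
    gait: walking style in [0, 2*pi), wrapped here to realize a toroidal prior.
    """
    x = (R + r * np.cos(pose)) * np.cos(gait)
    y = (R + r * np.cos(pose)) * np.sin(gait)
    z = r * np.sin(pose)
    return np.stack([x, y, z], axis=-1)

def rbf_kernel(A, B, length_scale=0.5, variance=1.0):
    """Squared-exponential kernel between latent point sets A (n,d), B (m,d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

rng = np.random.default_rng(0)

# Synthetic training set: latent torus coordinates -> stand-in joint signals.
pose = rng.uniform(0, 2 * np.pi, 200)
gait = rng.uniform(0, 2 * np.pi, 200)
X = torus_embed(pose, gait)
Y = np.column_stack([
    np.sin(pose) * (1 + 0.3 * np.cos(gait)),
    np.cos(2 * pose) * (1 + 0.3 * np.sin(gait)),
])

# Forward (generative) mapping: GP regression from latent space to kinematics.
K = rbf_kernel(X, X) + 1e-4 * np.eye(len(X))
alpha = np.linalg.solve(K, Y)

def predict(pose_q, gait_q):
    """GP posterior mean of the kinematics at query latent coordinates."""
    Xq = torus_embed(np.atleast_1d(pose_q), np.atleast_1d(gait_q))
    return rbf_kernel(Xq, X) @ alpha

# Toy inference, the direction a visual generative model needs: find the
# latent (pose, gait) whose predicted kinematics best match an observation,
# here by grid search purely for illustration.
y_obs = predict(1.0, 2.0)[0] + 0.01 * rng.standard_normal(2)
grid = np.linspace(0, 2 * np.pi, 60)
P, G = np.meshgrid(grid, grid)
err = ((predict(P.ravel(), G.ravel()) - y_obs) ** 2).sum(axis=1)
i = err.argmin()
print("estimated (pose, gait):", P.ravel()[i], G.ravel()[i])
```

The torus makes the pose dimension wrap around, matching the cyclic nature of a gait cycle; in the paper this topology enters as a prior on the GPLVM latent space rather than as the fixed embedding used in this sketch.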