Fast Online Video Pose Estimation by Dynamic Bayesian Modeling of Mode Transitions


Bibliographic Details
Published in: IEEE Transactions on Cybernetics, 2021-01, Vol. 51 (1), p. 2-15
Main Authors: Chang, Ming-Ching, Ke, Lipeng, Qi, Honggang, Wen, Longyin, Lyu, Siwei
Format: Article
Language: English
Description
Abstract: We propose a fast online video pose estimation method that detects and tracks human upper-body poses based on a conditional dynamic Bayesian modeling of pose modes, without referring to future frames. The estimation of human body poses from videos is an important task with many applications. Our method extends fast image-based pose estimation to live video streams by leveraging the temporal correlation of articulated poses between frames. Poses are inferred over a time window using a conditional dynamic Bayesian network (CDBN), which we term the time-windowed CDBN. Specifically, latent pose modes and their transitions are modeled and co-determined from the combination of three modules: 1) inference based on current observations; 2) the modeling of mode-to-mode transitions as a probabilistic prior; and 3) the modeling of state-to-mode transitions using a multimode softmax regression. Given the predicted pose modes, the body poses, in terms of arm joint locations, can then be determined more accurately and robustly. Our method is well suited to high frame rate (HFR) scenarios, where pose mode transitions effectively capture action-related temporal information to boost performance. We evaluate our method on a newly collected HFR-Pose dataset and four major video pose datasets (VideoPose2, TUM Kitchen, FLIC, and Penn_Action). Our method achieves improvements in both accuracy and efficiency over existing online video pose estimation methods.
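The abstract describes the pose mode at each frame as being co-determined by three modules: an observation-based term, a mode-to-mode transition prior, and a state-to-mode softmax regression. The Python sketch below is only an illustrative reading of that combination, not the paper's actual model: the names (infer_mode, mode_prototypes, etc.), the toy Gaussian likelihood, and the randomly initialized parameters are all assumptions standing in for the learned quantities.

```python
import numpy as np

# Hypothetical dimensions: M latent pose modes, D-dimensional pose state
# (e.g., stacked arm joint coordinates).
M, D = 4, 8
rng = np.random.default_rng(0)

# Illustrative stand-ins for the model's learned parameters.
transition_prior = np.full((M, M), 1.0 / M)   # module 2: P(mode_t | mode_{t-1})
softmax_weights = rng.normal(size=(M, D))     # module 3: state-to-mode regression weights
softmax_bias = np.zeros(M)
mode_prototypes = rng.normal(size=(M, D))     # per-mode templates for the toy likelihood

def observation_likelihoods(frame_features):
    """Module 1 (toy version): likelihood of the current observation under each mode,
    scored as a Gaussian around per-mode prototype features."""
    d2 = np.sum((mode_prototypes - frame_features) ** 2, axis=1)
    return np.exp(-0.5 * d2)

def state_to_mode_probs(prev_state):
    """Module 3: multimode softmax regression from the previous pose state to mode probabilities."""
    logits = softmax_weights @ prev_state + softmax_bias
    logits -= logits.max()                    # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def infer_mode(frame_features, prev_mode, prev_state):
    """Combine the three modules into a (normalized) score over pose modes and pick the best."""
    obs = observation_likelihoods(frame_features)     # module 1
    prior = transition_prior[prev_mode]               # module 2
    state_term = state_to_mode_probs(prev_state)      # module 3
    posterior = obs * prior * state_term
    return int(np.argmax(posterior / posterior.sum()))

# Toy online loop: one mode decision per incoming frame, no look-ahead to future frames.
state, mode = np.zeros(D), 0
for t in range(3):
    frame = rng.normal(size=D)        # stand-in for per-frame pose features
    mode = infer_mode(frame, mode, state)
    state = frame                     # stand-in for the updated pose state
    print(f"frame {t}: predicted pose mode {mode}")
```

Given a predicted mode, the paper then localizes arm joints conditioned on that mode; the sketch stops at the mode decision itself.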
ISSN: 2168-2267, 2168-2275
DOI: 10.1109/TCYB.2018.2884216