Combining detection and tracking for human pose estimation in videos
Format: Article
Language: English
Online access: Order full text
Abstract: We propose a novel top-down approach that tackles the problem of multi-person
human pose estimation and tracking in videos. In contrast to existing top-down
approaches, our method is not limited by the performance of its person detector
and can predict the poses of person instances not localized. It achieves this
capability by propagating known person locations forward and backward in time
and searching for poses in those regions. Our approach consists of three
components: (i) a Clip Tracking Network that performs body joint detection and
tracking simultaneously on small video clips; (ii) a Video Tracking Pipeline
that merges the fixed-length tracklets produced by the Clip Tracking Network to
arbitrary length tracks; and (iii) a Spatial-Temporal Merging procedure that
refines the joint locations based on spatial and temporal smoothing terms.
Thanks to the precision of our Clip Tracking Network and our merging procedure,
our approach produces very accurate joint predictions and can fix common
mistakes on hard scenarios like heavily entangled people. Our approach achieves
state-of-the-art results on both joint detection and tracking, on both the
PoseTrack 2017 and 2018 datasets, and against all top-down and bottom-up
approaches.
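The merging step in component (ii), which links fixed-length tracklets into arbitrary-length tracks, can be illustrated with a minimal sketch: two tracklets are joined when their poses agree on the frames they share. The mean-joint-distance criterion, the threshold value, and the greedy merge order below are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of merging fixed-length tracklets into longer tracks.
# A tracklet maps frame index -> pose, where a pose is a list of (x, y) joints.
# NOTE: the mean-joint-distance criterion and the threshold are illustrative
# assumptions; the paper's Video Tracking Pipeline may match tracklets differently.

def mean_joint_distance(pose_a, pose_b):
    """Average Euclidean distance between corresponding joints of two poses."""
    dists = [((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
             for (xa, ya), (xb, yb) in zip(pose_a, pose_b)]
    return sum(dists) / len(dists)

def can_link(tracklet_a, tracklet_b, threshold=10.0):
    """Two tracklets may be linked if their poses agree on every shared frame."""
    shared = set(tracklet_a) & set(tracklet_b)
    if not shared:
        return False
    return all(mean_joint_distance(tracklet_a[f], tracklet_b[f]) < threshold
               for f in shared)

def merge_tracklets(tracklets, threshold=10.0):
    """Greedily grow arbitrary-length tracks out of fixed-length tracklets."""
    tracks = []
    for tl in sorted(tracklets, key=lambda t: min(t)):
        for track in tracks:
            if can_link(track, tl, threshold):
                for f, pose in tl.items():
                    if f in track:
                        # Average the two pose estimates on shared frames.
                        track[f] = [((x1 + x2) / 2, (y1 + y2) / 2)
                                    for (x1, y1), (x2, y2) in zip(track[f], pose)]
                    else:
                        track[f] = pose
                break
        else:
            tracks.append(dict(tl))
    return tracks

# Two overlapping tracklets of the same person plus one other person:
t1 = {0: [(0.0, 0.0), (1.0, 1.0)], 1: [(1.0, 0.0), (2.0, 1.0)]}
t2 = {1: [(1.0, 0.0), (2.0, 1.0)], 2: [(2.0, 0.0), (3.0, 1.0)]}
t3 = {0: [(100.0, 100.0), (101.0, 101.0)]}
tracks = merge_tracklets([t1, t2, t3])
# t1 and t2 merge into one track covering frames 0-2; t3 stays separate.
```

The spatial-temporal refinement of component (iii) would then smooth joint locations along each merged track; that step is omitted here.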
DOI: 10.48550/arxiv.2003.13743