Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation
| Main authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Summary: The advances in monocular 3D human pose estimation are dominated by supervised techniques that require large-scale 2D/3D pose annotations. Such methods often behave erratically in the absence of any provision to discard unfamiliar out-of-distribution data. To this end, we cast 3D human pose learning as an unsupervised domain adaptation problem. We introduce MRP-Net, which consists of a common deep network backbone with two output heads subscribing to two diverse configurations: a) model-free joint localization and b) model-based parametric regression. Such a design allows us to derive suitable measures to quantify prediction uncertainty at both pose and joint level granularity. While supervising only on labeled synthetic samples, the adaptation process aims to minimize the uncertainty for the unlabeled target images while maximizing it for an extreme out-of-distribution dataset (backgrounds). Alongside synthetic-to-real 3D pose adaptation, the joint uncertainties allow the adaptation to extend to in-the-wild images, even in the presence of occlusion and truncation. We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
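The mechanism the summary describes (one backbone feeding two heads whose predictions can be compared, with the resulting uncertainty minimized on unlabeled target images and maximized on out-of-distribution backgrounds) can be sketched in a few lines. The sketch below is illustrative only: it assumes uncertainty is measured as inter-head disagreement, and the names (`TwoHeadPoseNet`, `joint_uncertainty`, `adaptation_loss`), the toy backbone, the linear stand-in for the kinematic model, `NUM_JOINTS`, and the `margin` are all assumptions of this sketch, not the authors' released implementation.

```python
# Minimal PyTorch sketch of the two-head, uncertainty-driven adaptation idea.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_JOINTS = 17  # assumed skeleton size

class TwoHeadPoseNet(nn.Module):
    """Shared backbone with (a) a model-free joint-localization head and
    (b) a model-based parametric-regression head."""
    def __init__(self):
        super().__init__()
        # Toy convolutional backbone standing in for the paper's deep network.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # (a) Model-free head: regress 3D joint locations directly.
        self.head_free = nn.Linear(64, NUM_JOINTS * 3)
        # (b) Model-based head: regress pose parameters, then map them to
        # joints; a fixed linear layer stands in for forward kinematics here.
        self.head_param = nn.Linear(64, NUM_JOINTS * 3)
        self.kinematics = nn.Linear(NUM_JOINTS * 3, NUM_JOINTS * 3, bias=False)

    def forward(self, x):
        feat = self.backbone(x)
        j_free = self.head_free(feat).view(-1, NUM_JOINTS, 3)
        params = self.head_param(feat)
        j_param = self.kinematics(params).view(-1, NUM_JOINTS, 3)
        return j_free, j_param

def joint_uncertainty(j_free, j_param):
    # Joint-level uncertainty, read here as inter-head disagreement
    # (an assumption of this sketch): per-joint L2 distance, shape (B, J).
    return (j_free - j_param).norm(dim=-1)

def adaptation_loss(model, x_src, y_src, x_tgt, x_ood, margin=1.0):
    # Supervision only on labeled synthetic source samples, for both heads.
    j_free, j_param = model(x_src)
    sup = F.mse_loss(j_free, y_src) + F.mse_loss(j_param, y_src)
    # Minimize uncertainty on unlabeled target images ...
    u_tgt = joint_uncertainty(*model(x_tgt)).mean()
    # ... while maximizing it (up to a margin, to keep the loss bounded)
    # on extreme out-of-distribution background crops.
    u_ood = joint_uncertainty(*model(x_ood)).mean()
    return sup + u_tgt + F.relu(margin - u_ood)
```

Because the disagreement is available per joint, the same signal can in principle flag individual occluded or truncated joints, which is how the summary motivates extending the adaptation to in-the-wild images.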
DOI: 10.48550/arxiv.2203.15293