Dense depth alignment for human pose and shape estimation

Bibliographic Details
Published in: Signal, Image and Video Processing, 2024-12, Vol. 18 (12), p. 8577-8584
Main Authors: Karagoz, Batuhan; Suat, Ozhan; Uguz, Bedirhan; Akbas, Emre
Format: Article
Language: English
Online Access: Full Text
Description
Abstract: Estimating 3D human pose and shape (HPS) from a monocular image has many applications. However, collecting ground-truth data for this problem is costly and constrained to limited lab environments. Researchers have used priors based on body structure or kinematics, as well as cues obtained from other vision tasks, to mitigate the scarcity of supervision. Despite its apparent potential in this context, monocular depth estimation has yet to be explored. In this paper, we propose the Dense Depth Alignment (DDA) method, in which we use an estimated dense depth map to create an auxiliary supervision signal for 3D HPS estimation. Specifically, we define a dense mapping between points on the surface of the human mesh and points reconstructed from depth estimation. We further introduce the idea of Camera Pretraining, a novel learning strategy in which, instead of estimating all parameters simultaneously, learning of the camera parameters is prioritized (before the pose and shape parameters) to avoid unwanted local minima. Our experiments on the Human3.6M and 3DPW datasets show that our DDA loss and Camera Pretraining significantly improve HPS estimation performance over using only 2D keypoint supervision or 2D and 3D supervision. Code will be provided for research purposes at the following URL: https://terteros.github.io/hmr-depth/.
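
To make the core idea concrete, the following is a minimal, hypothetical sketch of a depth-alignment loss in the spirit of the abstract: pixels of the estimated depth map are back-projected to 3D and compared against the mesh surface points they are mapped to. All function names, tensor shapes, and the handling of depth scale are assumptions made for illustration only; the authors' actual implementation is the code linked above.

```python
# Illustrative sketch only -- not the authors' DDA implementation.
import torch

def backproject_depth(depth, intrinsics):
    """Lift a dense depth map of shape (H, W) to camera-space points (H*W, 3)."""
    H, W = depth.shape
    fx, fy, cx, cy = intrinsics
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    z = depth.reshape(-1)
    x = (u.reshape(-1) - cx) * z / fx
    y = (v.reshape(-1) - cy) * z / fy
    return torch.stack([x, y, z], dim=-1)

def dense_depth_alignment_loss(mesh_vertices, depth_points, correspondences, valid_mask):
    """
    mesh_vertices:   (V, 3) predicted human-mesh surface points in the camera frame
    depth_points:    (N, 3) points back-projected from the estimated depth map
    correspondences: (N,)   index of the mesh vertex each depth point is mapped to
    valid_mask:      (N,)   True where the depth pixel lies on the person

    Monocular depth is typically defined only up to scale/shift, so a practical
    loss would first align scale; this sketch skips that step.
    """
    matched = mesh_vertices[correspondences]          # (N, 3) matched surface points
    residual = (depth_points - matched)[valid_mask]   # keep person pixels only
    return residual.norm(dim=-1).mean()
```

Under the Camera Pretraining strategy described in the abstract, a loss of this kind (together with the 2D keypoint terms) would first be minimized with respect to the camera parameters only, before the pose and shape parameters are also optimized.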
ISSN: 1863-1703, 1863-1711
DOI: 10.1007/s11760-024-03491-9