Light3DPose: Real-time Multi-Person 3D Pose Estimation from Multiple Views
Main authors: , ,
Format: Article
Language: English
Abstract: We present an approach to perform 3D pose estimation of multiple people from
a few calibrated camera views. Our architecture, leveraging the recently
proposed unprojection layer, aggregates feature-maps from a 2D pose estimator
backbone into a comprehensive representation of the 3D scene. This intermediate
representation is then processed by a fully convolutional volumetric network
and a decoding stage to extract 3D skeletons with sub-voxel accuracy. Our
method achieves state-of-the-art MPJPE on the CMU Panoptic dataset using a few
unseen views and obtains competitive results even with a single input view. We
also assess the transfer learning capabilities of the model by evaluating it
on the publicly available Shelf dataset, where it obtains good
results. The proposed method is inherently efficient: as a pure bottom-up
approach, it is computationally independent of the number of people in the
scene. Furthermore, even though the computational burden of the 2D part scales
linearly with the number of input views, the overall architecture is able to
exploit a very lightweight 2D backbone which is orders of magnitude faster than
the volumetric counterpart, resulting in fast inference time. The system can
run at 6 FPS, processing up to 10 camera views on a single 1080Ti GPU.
DOI: 10.48550/arxiv.2004.02688
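
The abstract names two mechanisms that are easy to illustrate in isolation: unprojecting per-view 2D feature maps into a shared voxel volume, and decoding joint positions with sub-voxel accuracy via a differentiable soft-argmax. The sketch below is not the authors' implementation; the pinhole projection model, the plain mean aggregation over views, and all names and shapes (`unproject`, `soft_argmax`, the grid layout) are illustrative assumptions.

```python
# Minimal PyTorch sketch of the two ideas described in the abstract.
import torch
import torch.nn.functional as F

def unproject(feats, projs, grid):
    """Aggregate 2D feature maps from V calibrated views into one voxel volume.

    feats: (V, C, H, W)  per-view backbone feature maps
    projs: (V, 3, 4)     camera projection matrices (world -> pixel)
    grid:  (X, Y, Z, 3)  world coordinates of the voxel centres
    Returns a (C, X, Y, Z) volume; here a plain mean over views (assumption).
    """
    V, C, H, W = feats.shape
    X, Y, Z, _ = grid.shape
    pts = grid.reshape(-1, 3)
    pts_h = torch.cat([pts, torch.ones(len(pts), 1)], dim=1)   # homogeneous
    volume = torch.zeros(C, X * Y * Z)
    for v in range(V):
        uvw = pts_h @ projs[v].T                        # project voxels into view v
        uv = uvw[:, :2] / uvw[:, 2:].clamp(min=1e-6)    # perspective divide
        uv = uv / torch.tensor([W - 1, H - 1]) * 2 - 1  # to [-1, 1] for grid_sample
        sampled = F.grid_sample(feats[v:v + 1], uv.view(1, 1, -1, 2),
                                align_corners=True)     # bilinear, (1, C, 1, N)
        volume += sampled.view(C, -1)
    return (volume / V).view(C, X, Y, Z)

def soft_argmax(heat, grid):
    """Sub-voxel decoding: expected world position under per-joint 3D heatmaps.

    heat: (J, X, Y, Z) volumetric heatmaps, e.g. the volumetric net's output
    Returns (J, 3) joint coordinates that need not lie on voxel centres.
    """
    J = heat.shape[0]
    w = torch.softmax(heat.reshape(J, -1), dim=1)       # per-joint distribution
    return w @ grid.reshape(-1, 3)                      # expectation over voxels

if __name__ == "__main__":
    # Toy run with random inputs: 4 views, 32 channels, a 16^3 grid, 15 joints.
    xs = torch.linspace(-2.0, 2.0, 16)
    grid = torch.stack(torch.meshgrid(xs, xs, xs, indexing="ij"), dim=-1)
    vol = unproject(torch.randn(4, 32, 60, 80), torch.randn(4, 3, 4), grid)
    print(vol.shape, soft_argmax(torch.randn(15, 16, 16, 16), grid).shape)
```

Note how this structure matches the efficiency claims: the volumetric stage's cost is fixed by the grid resolution rather than by the number of people in the scene, and only the per-view loop over the 2D backbone features grows with the number of input views.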