Learning Pose Specific Representations by Predicting Different Views
Main Authors: , ,
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: The labeled data required to learn pose estimation for articulated objects is difficult to provide in the desired quantity, realism, density, and accuracy. To address this issue, we develop a method to learn representations that are highly specific to articulated poses, without the need for labeled training data. We exploit the observation that the pose of a known object is predictive of its appearance in any known view. That is, given only the pose and shape parameters of a hand, the hand's appearance from any viewpoint can be approximated. To exploit this observation, we train a model that, given input from one view, estimates a latent representation trained to be predictive of the object's appearance when captured from another viewpoint. Thus, the only necessary supervision is the second view. The training process of this model reveals an implicit pose representation in the latent space. Importantly, at test time the pose representation can be inferred using only a single view. In qualitative and quantitative experiments, we show that the learned representations capture detailed pose information. Moreover, when the proposed method is trained jointly with labeled and unlabeled data, it consistently surpasses the performance of its fully supervised counterpart, while reducing the number of labeled samples needed by at least one order of magnitude.
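To make the training setup described in the summary concrete, here is a minimal PyTorch sketch of the cross-view prediction loop: an encoder maps the first view to a latent code, a decoder is trained to reproduce the second view from that code, and a pose head is trained only on the labeled subset. All names (ViewEncoder, ViewDecoder, pose_head), the layer sizes, the 64x64 single-channel inputs, and the 21-joint pose format are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of the cross-view prediction idea; all architecture
# choices below are illustrative assumptions, not the authors' exact model.
import torch
import torch.nn as nn

class ViewEncoder(nn.Module):
    """Maps an input image from the first view to a latent (implicit pose) code."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class ViewDecoder(nn.Module):
    """Predicts the object's appearance as captured from the second viewpoint."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),              # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder, decoder = ViewEncoder(), ViewDecoder()
pose_head = nn.Linear(64, 3 * 21)  # hypothetical: 21 joints x 3D, used for labeled samples
opt = torch.optim.Adam(
    [*encoder.parameters(), *decoder.parameters(), *pose_head.parameters()], lr=1e-4
)

def train_step(view1, view2, pose=None):
    """One step: view1/view2 are synchronized captures of the same hand;
    pose labels exist only for the (small) labeled subset."""
    z = encoder(view1)
    # The only necessary supervision is the second view itself.
    loss = nn.functional.mse_loss(decoder(z), view2)
    if pose is not None:  # optional joint training with labeled data
        loss = loss + nn.functional.mse_loss(pose_head(z), pose)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage with dummy 64x64 single-channel images:
v1, v2 = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
print(train_step(v1, v2))                      # unlabeled batch
print(train_step(v1, v2, torch.randn(8, 63)))  # labeled batch
```

At test time only the encoder is needed: the latent code (or the output of the pose head) serves as the pose representation inferred from a single view, matching the summary's claim that test-time inference requires one view only.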
DOI: 10.48550/arxiv.1804.03390