WeLa-VAE: Learning Alternative Disentangled Representations Using Weak Labels
Format: Article
Language: English
Abstract: Learning disentangled representations without supervision or inductive biases often leads to non-interpretable or undesirable representations. On the other hand, strict supervision requires detailed knowledge of the true generative factors, which is not always available. In this paper, we consider weak supervision by means of high-level labels that are not assumed to be explicitly related to the ground-truth factors. Such labels, while being easier to acquire, can also serve as inductive biases that guide algorithms toward more interpretable or alternative disentangled representations. To this end, we propose WeLa-VAE, a variational inference framework in which observations and labels share the same latent variables, and which involves the maximization of a modified variational lower bound together with total correlation regularization. Our method is a generalization of TCVAE that adds only one extra hyperparameter. We experiment on a dataset generated from Cartesian coordinates and show that, while TCVAE learns a factorized Cartesian representation, WeLa-VAE, given weak labels of distance and angle, is able to learn and disentangle a polar representation. This is achieved without refined labels and without having to adjust the number of layers, the optimization parameters, or the total correlation hyperparameter.
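The abstract describes an objective in which observations and weak labels share the same latent variables, combined with a total-correlation penalty as in beta-TCVAE. A minimal sketch of such a loss is given below; the label-reconstruction weight `gamma`, the squared-error likelihoods, and the `(beta - 1)` weighting of the TC term are assumptions for illustration, not the paper's exact formulation, and the total-correlation estimate is taken as an input since it requires a separate estimator.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims per sample
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=1)

def wela_vae_loss(x, x_rec, y, y_rec, mu, logvar, tc_estimate,
                  beta=6.0, gamma=1.0):
    """Sketch of a WeLa-VAE-style objective (to be minimized).

    recon_x : -log p(x|z), approximated here by squared error
    recon_y : -log p(y|z) for the weak labels, weighted by the extra
              hyperparameter gamma (hypothetical name)
    kl      : standard VAE KL term for the shared latents z
    tc      : total-correlation estimate, penalized with extra weight
              (beta - 1) as in beta-TCVAE
    """
    recon_x = np.sum((x - x_rec) ** 2, axis=1)
    recon_y = np.sum((y - y_rec) ** 2, axis=1)
    kl = gaussian_kl(mu, logvar)
    return np.mean(recon_x + gamma * recon_y + kl
                   + (beta - 1.0) * tc_estimate)
```

Setting `gamma = 0` drops the label term and recovers a TCVAE-style objective, which matches the abstract's claim that the method generalizes TCVAE with a single extra hyperparameter.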
DOI: 10.48550/arxiv.2008.09879