Semi-supervised Disentanglement with Independent Vector Variational Autoencoders
Main authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | We aim to separate the generative factors of data into two latent vectors in a variational autoencoder. One vector captures class factors relevant to target classification tasks, while the other captures style factors covering the remaining information. To learn the discrete class features, we introduce supervision using a small amount of labeled data, which simply yet effectively reduces the hyperparameter-tuning effort required by existing unsupervised methods. Furthermore, we introduce a learning objective that encourages statistical independence between the vectors. We show that (i) this vector-independence term arises when the evidence lower bound is decomposed over multiple latent vectors, and (ii) encouraging such independence, along with reducing the total correlation within each vector, enhances disentanglement performance. Experiments on several image datasets demonstrate that the disentanglement achieved via our method improves classification performance and generation controllability. |
DOI: | 10.48550/arxiv.2003.06581 |
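The abstract's claim (i), that a vector-independence term appears when the evidence lower bound is decomposed over multiple latent vectors, can be illustrated with a standard aggregate-posterior decomposition. The minimal sketch below uses our own notation rather than the paper's: c denotes the class vector, s the style vector, q_\phi the encoder and its aggregate posterior, p(c)p(s) a factorized prior, and I_q(x; c, s) the mutual information between data and latents under the aggregate posterior. It follows the spirit of beta-TCVAE-style decompositions and is an assumption about the general form, not the authors' exact objective.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Hedged sketch, not the paper's notation: averaging the ELBO's KL term over
% the data and introducing the aggregate posterior q_\phi(c,s) splits it into
% a mutual-information term and a KL to the factorized prior; the latter then
% exposes an independence term between the class vector c and style vector s.
\begin{align}
\mathbb{E}_{p_{\mathrm{data}}(x)}\!\left[\mathrm{KL}\!\left(q_\phi(c,s\mid x)\,\middle\|\,p(c)\,p(s)\right)\right]
  &= I_{q}(x;\,c,s) + \mathrm{KL}\!\left(q_\phi(c,s)\,\middle\|\,p(c)\,p(s)\right), \\
\mathrm{KL}\!\left(q_\phi(c,s)\,\middle\|\,p(c)\,p(s)\right)
  &= \underbrace{\mathrm{KL}\!\left(q_\phi(c,s)\,\middle\|\,q_\phi(c)\,q_\phi(s)\right)}_{\text{independence between the two vectors}} \notag \\
  &\quad + \mathrm{KL}\!\left(q_\phi(c)\,\middle\|\,p(c)\right)
         + \mathrm{KL}\!\left(q_\phi(s)\,\middle\|\,p(s)\right).
\end{align}
\end{document}
```

Each per-vector KL on the last line can in turn be split into a within-vector total correlation plus dimension-wise KL terms, which is presumably where the abstract's "total correlation within the vectors" enters.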