Learning Latent Superstructures in Variational Autoencoders for Deep Multidimensional Clustering
Saved in:
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: We investigate a variant of variational autoencoders where there is a superstructure of discrete latent variables on top of the latent features. In general, our superstructure is a tree structure of multiple super latent variables and it is automatically learned from data. When there is only one latent variable in the superstructure, our model reduces to one that assumes the latent features to be generated from a Gaussian mixture model. We call our model the latent tree variational autoencoder (LTVAE). Whereas previous deep learning methods for clustering produce only one partition of data, LTVAE produces multiple partitions of data, each given by one super latent variable. This is desirable because high-dimensional data usually have many different natural facets and can be meaningfully partitioned in multiple ways.
DOI: 10.48550/arxiv.1803.05206
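The special case described in the abstract, a single super latent variable, means the latent features follow a Gaussian mixture prior. Below is a minimal NumPy sketch of that generative process only; the sizes (`K`, `D`, `X_DIM`) and the linear stand-in decoder are illustrative assumptions, not taken from the paper, which uses a neural-network decoder and learns the tree structure from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not from the paper: K mixture components,
# D latent features, X_DIM observed dimensions.
K, D, X_DIM = 3, 2, 5

# One discrete super latent variable y => Gaussian mixture over z.
mix_weights = np.full(K, 1.0 / K)   # p(y): uniform over components
means = rng.normal(size=(K, D))     # per-component means mu_y
scales = np.ones((K, D))            # per-component diagonal std devs

# Stand-in linear "decoder"; the paper uses a neural network here.
W = rng.normal(size=(D, X_DIM))

def sample(n):
    """Draw n observations from the generative process:
    y ~ Cat(pi), z | y ~ N(mu_y, diag(sigma_y^2)), x = decoder(z) + noise."""
    y = rng.choice(K, size=n, p=mix_weights)
    z = means[y] + scales[y] * rng.normal(size=(n, D))
    x = z @ W + 0.1 * rng.normal(size=(n, X_DIM))
    return y, z, x

y, z, x = sample(1000)
# Each value of y induces one cluster assignment, i.e. one partition
# of the data; the full LTVAE has several such variables in a tree,
# yielding multiple partitions.
```

In the full model, several such discrete variables connected in a learned tree each induce their own mixture over a subset of the latent features, which is how LTVAE produces multiple partitions of the same data.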