TC-VAE: Uncovering Out-of-Distribution Data Generative Factors
Saved in:

| Main authors: | , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Abstract: Uncovering data generative factors is the ultimate goal of disentanglement learning. Although many works have proposed disentangling generative models able to uncover the underlying generative factors of a dataset, so far none has been able to uncover OOD generative factors (i.e., factors of variation that are not explicitly present in the dataset). Moreover, the datasets used to validate these models are synthetically generated using a balanced mixture of some predefined generative factors, implicitly assuming that generative factors are uniformly distributed across the dataset. However, real datasets do not have this property. In this work we analyse the effect of using datasets with unbalanced generative factors, providing qualitative and quantitative results for widely used generative models. Moreover, we propose TC-VAE, a generative model optimized using a lower bound of the joint total correlation between the learned latent representations and the input data. We show that the proposed model is able to uncover OOD generative factors on different datasets and, on average, outperforms the related baselines in terms of downstream disentanglement metrics.
DOI: 10.48550/arxiv.2304.04103
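
For readers unfamiliar with the objective named in the abstract, the following is a generic sketch of what a total-correlation term looks like. It states only standard definitions; it is not the specific lower bound derived in the article, and the reading of "joint total correlation" in the last equation is an assumption made here for illustration.

```latex
% Total correlation (TC) of a collection of random variables: the KL divergence
% between their joint distribution and the product of their marginals.
\[
\mathrm{TC}(v_1,\dots,v_n) = D_{\mathrm{KL}}\!\left( p(v_1,\dots,v_n) \,\Big\|\, \prod_{i=1}^{n} p(v_i) \right)
\]

% Specialized to the aggregate posterior over latents z = (z_1, ..., z_d), this
% is the TC term penalized in beta-TC-VAE-style objectives.
\[
\mathrm{TC}(z) = D_{\mathrm{KL}}\!\left( q(z) \,\Big\|\, \prod_{j=1}^{d} q(z_j) \right)
\]

% One plausible reading (an assumption, not taken from the article) of the
% "joint total correlation between the learned latent representations and the
% input data" is the TC of the collection (x, z_1, ..., z_d):
\[
\mathrm{TC}(x, z) = D_{\mathrm{KL}}\!\left( q(x, z) \,\Big\|\, q(x) \prod_{j=1}^{d} q(z_j) \right)
\]
```

Minimizing a TC term of this kind encourages statistically independent latent dimensions, which is the usual route by which total-correlation-based objectives promote disentanglement; the exact bound optimized by TC-VAE is given in the article itself.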