Leveraging Relational Information for Learning Weakly Disentangled Representations
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Disentanglement is a difficult property to enforce in neural representations. This might be due, in part, to a formalization of the disentanglement problem that focuses too heavily on separating the relevant factors of variation of the data into single, isolated dimensions of the neural representation. We argue that such a definition might be too restrictive and not necessarily beneficial for downstream tasks. In this work, we present an alternative view on learning (weakly) disentangled representations, which leverages concepts from relational learning. We identify the regions of the latent space that correspond to specific instances of the generative factors, and we learn the relationships among these regions in order to perform controlled changes to the latent codes. We also introduce a compound generative model that implements this weak disentanglement approach. Our experiments show that the learned representations can separate the relevant factors of variation in the data while preserving the information needed to generate high-quality data samples.
DOI: 10.48550/arxiv.2205.10056
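The abstract describes the approach only at a high level: identify latent regions that correspond to specific instances of generative factors, then learn relations between those regions so a latent code can be edited in a controlled way. The following minimal PyTorch sketch illustrates that idea under stated assumptions; it is not the authors' model, and all module names (Encoder, RegionClassifier, RelationalTransform), dimensions, and the residual-edit formulation are hypothetical choices made for illustration.

```python
# Minimal illustrative sketch (not the authors' code) of editing latent codes
# via learned relations between latent regions, where each region corresponds
# to one instance of a generative factor (e.g. "digit = 3" or "color = red").
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps data to latent codes, e.g. the encoder of a VAE-like model."""
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                 nn.Linear(256, z_dim))
    def forward(self, x):
        return self.net(x)

class RegionClassifier(nn.Module):
    """Assigns a latent code to a region, i.e. to a factor instance."""
    def __init__(self, z_dim=16, n_regions=10):
        super().__init__()
        self.net = nn.Linear(z_dim, n_regions)
    def forward(self, z):
        return self.net(z).softmax(dim=-1)

class RelationalTransform(nn.Module):
    """Learns a mapping between regions so that a controlled change of one
    factor instance (source region -> target region) can be applied to a
    latent code while the rest of its content is meant to be preserved."""
    def __init__(self, z_dim=16, n_regions=10):
        super().__init__()
        self.region_emb = nn.Embedding(n_regions, z_dim)
        self.net = nn.Sequential(nn.Linear(3 * z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, z_dim))
    def forward(self, z, src_region, tgt_region):
        cond = torch.cat([z, self.region_emb(src_region),
                          self.region_emb(tgt_region)], dim=-1)
        return z + self.net(cond)  # residual edit of the latent code

# Usage: move codes from their current region to region 7, then a decoder
# (not shown) would generate the correspondingly edited samples.
enc, cls, rel = Encoder(), RegionClassifier(), RelationalTransform()
x = torch.randn(4, 784)
z = enc(x)
src = cls(z).argmax(dim=-1)        # current factor instance of each code
tgt = torch.full_like(src, 7)      # desired factor instance
z_edited = rel(z, src, tgt)        # controlled change of the latent codes
```

In this toy setup, "weak disentanglement" would mean that factors are separated at the level of regions and learned relations between them, not at the level of individual latent dimensions; the training objectives that enforce this are part of the paper and are not reproduced here.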