Polarized-VAE: Proximity Based Disentangled Representation Learning for Text Generation
Saved in:
Main Authors: | |
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | Learning disentangled representations of real-world data is a challenging open problem. Most previous methods have focused on either supervised approaches, which use attribute labels, or unsupervised approaches that manipulate the factorization in the latent space of models such as the variational autoencoder (VAE) by training with task-specific losses. In this work, we propose polarized-VAE, an approach that disentangles select attributes in the latent space based on proximity measures reflecting the similarity between data points with respect to these attributes. We apply our method to disentangle the semantics and syntax of sentences and carry out transfer experiments. Polarized-VAE outperforms the VAE baseline and is competitive with state-of-the-art approaches, while being a more general framework that is applicable to other attribute disentanglement tasks. |
DOI: | 10.48550/arxiv.2004.10809 |
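The record contains no code, but the proximity-based idea described in the abstract can be made concrete with a small sketch. The snippet below is an illustrative assumption, not the authors' implementation: it shows one common way to realize a proximity measure as a contrastive-style penalty on an attribute-specific slice of the latent code, pulling pairs that are similar with respect to the attribute together and pushing dissimilar pairs at least a margin apart. The function `proximity_loss`, the partition `z_syntax`, and the toy similarity matrix are all hypothetical names introduced here for illustration.

```python
# Illustrative sketch (not the paper's implementation): a proximity-based
# penalty on one attribute-specific partition of a VAE latent code.
import torch
import torch.nn.functional as F


def proximity_loss(z_attr: torch.Tensor,
                   similar: torch.Tensor,
                   margin: float = 1.0) -> torch.Tensor:
    """Contrastive-style proximity penalty on one latent partition.

    z_attr  : (batch, d) latent slice assigned to the attribute.
    similar : (batch, batch) 1.0 where a pair is similar w.r.t. the attribute, else 0.0.
    """
    dist = torch.cdist(z_attr, z_attr, p=2)                 # pairwise Euclidean distances
    pull = similar * dist.pow(2)                            # pull similar pairs together
    push = (1.0 - similar) * F.relu(margin - dist).pow(2)   # push dissimilar pairs apart
    mask = 1.0 - torch.eye(z_attr.size(0), device=z_attr.device)  # drop self-pairs
    return ((pull + push) * mask).sum() / mask.sum()


if __name__ == "__main__":
    torch.manual_seed(0)
    z_syntax = torch.randn(4, 8, requires_grad=True)        # hypothetical syntax partition
    sim = torch.tensor([[1., 1., 0., 0.],
                        [1., 1., 0., 0.],
                        [0., 0., 1., 1.],
                        [0., 0., 1., 1.]])                  # toy pairwise similarity
    loss = proximity_loss(z_syntax, sim)
    loss.backward()                                         # gradients flow into the latent slice
    print(float(loss))
```

In a full model, a term like this would be added to the usual VAE reconstruction and KL objectives, with one penalty per attribute partition (e.g. one for semantics and one for syntax, as in the transfer experiments the abstract describes).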