Dueling Decoders: Regularizing Variational Autoencoder Latent Spaces
Saved in:

Main Authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Online Access: | Order full text |
DOI: | 10.48550/arxiv.1905.07478 |
Summary:

Variational autoencoders learn unsupervised data representations, but these models frequently converge to minima that fail to preserve meaningful semantic information. For example, variational autoencoders with autoregressive decoders often collapse into autodecoders, where they learn to ignore the encoder input. In this work, we demonstrate that adding an auxiliary decoder to regularize the latent space can prevent this collapse, but successful auxiliary decoding tasks are domain-dependent. Auxiliary decoders can increase the amount of semantic information encoded in the latent space and visible in the reconstructions. The semantic information in the variational autoencoder's representation is only weakly correlated with its rate, distortion, or evidence lower bound. Compared to other popular strategies that modify the training objective, our regularization of the latent space generally increased the semantic information content.
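The abstract's core idea is that a second, simpler decoder is trained on the same latent code, so the code cannot collapse into something the primary decoder ignores. The sketch below illustrates that idea in PyTorch. It is a minimal sketch, not the paper's implementation: the class and parameter names (`DuelingDecoderVAE`, `aux_weight`), the MLP architectures, and the choice of a plain reconstruction task for the auxiliary decoder are all assumptions made for illustration; the paper's actual decoders and auxiliary tasks are not specified in this record.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DuelingDecoderVAE(nn.Module):
    """Minimal VAE whose latent code feeds two decoders: a primary
    decoder and a deliberately simple auxiliary decoder whose extra
    reconstruction loss keeps the latent code informative.
    (Hypothetical sketch; architectures are not from the paper.)"""

    def __init__(self, x_dim=784, z_dim=32, h_dim=256):
        super().__init__()
        # Encoder maps x to the mean and log-variance of q(z|x).
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, 2 * z_dim),
        )
        # Primary decoder (an MLP stand-in for the paper's
        # autoregressive decoder, which could model x while ignoring z).
        self.primary_decoder = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim),
        )
        # Auxiliary decoder: too weak to reconstruct x without using z,
        # so its loss forces semantic information into the latent code.
        self.aux_decoder = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, log_var = self.encoder(x).chunk(2, dim=-1)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return self.primary_decoder(z), self.aux_decoder(z), mu, log_var


def dueling_elbo_loss(x, x_primary, x_aux, mu, log_var, aux_weight=1.0):
    # Distortion: reconstruction term of the primary decoder.
    distortion = F.binary_cross_entropy_with_logits(x_primary, x, reduction="sum")
    # Auxiliary reconstruction term regularizing the latent space.
    aux = F.binary_cross_entropy_with_logits(x_aux, x, reduction="sum")
    # Rate: KL(q(z|x) || N(0, I)), closed form for diagonal Gaussians.
    rate = -0.5 * torch.sum(1.0 + log_var - mu.pow(2) - log_var.exp())
    # Negative ELBO plus the weighted auxiliary loss.
    return distortion + rate + aux_weight * aux


# Toy usage on a batch of flattened images with values in [0, 1].
model = DuelingDecoderVAE()
x = torch.rand(16, 784)
x_primary, x_aux, mu, log_var = model(x)
dueling_elbo_loss(x, x_primary, x_aux, mu, log_var).backward()
```

A larger `aux_weight` pushes more information into the latent code at the cost of raising the primary decoder's distortion. As the abstract notes, which auxiliary task succeeds is domain-dependent, so the linear reconstruction decoder here is only one possible choice.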