Dueling Decoders: Regularizing Variational Autoencoder Latent Spaces
Variational autoencoders learn unsupervised data representations, but these models frequently converge to minima that fail to preserve meaningful semantic information. For example, variational autoencoders with autoregressive decoders often collapse into autodecoders, where they learn to ignore the encoder input.
Saved in:
Main Authors: | Seybold, Bryan; Fertig, Emily; Alemi, Alex; Fischer, Ian |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Learning; Statistics - Machine Learning |
Online Access: | Order full text |
creator | Seybold, Bryan; Fertig, Emily; Alemi, Alex; Fischer, Ian |
---|---|
description | Variational autoencoders learn unsupervised data representations, but these models frequently converge to minima that fail to preserve meaningful semantic information. For example, variational autoencoders with autoregressive decoders often collapse into autodecoders, where they learn to ignore the encoder input. In this work, we demonstrate that adding an auxiliary decoder to regularize the latent space can prevent this collapse, but successful auxiliary decoding tasks are domain dependent. Auxiliary decoders can increase the amount of semantic information encoded in the latent space and visible in the reconstructions. The semantic information in the variational autoencoder's representation is only weakly correlated with its rate, distortion, or evidence lower bound. Compared to other popular strategies that modify the training objective, our regularization of the latent space generally increased the semantic information content. (An illustrative rate-distortion identity and a hedged code sketch of the auxiliary-decoder idea appear after the record fields below.) |
doi_str_mv | 10.48550/arxiv.1905.07478 |
format | Article |
creationdate | 2019-05-17 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
oa | free_for_read |
backlink | https://arxiv.org/abs/1905.07478 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1905.07478 |
language | eng |
recordid | cdi_arxiv_primary_1905_07478 |
source | arXiv.org |
subjects | Computer Science - Learning; Statistics - Machine Learning |
title | Dueling Decoders: Regularizing Variational Autoencoder Latent Spaces |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-02T06%3A09%3A46IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Dueling%20Decoders:%20Regularizing%20Variational%20Autoencoder%20Latent%20Spaces&rft.au=Seybold,%20Bryan&rft.date=2019-05-17&rft_id=info:doi/10.48550/arxiv.1905.07478&rft_dat=%3Carxiv_GOX%3E1905_07478%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |
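Notes on the abstract (illustrative additions, not taken from the paper itself):

The abstract compares representations by their rate, distortion, and evidence lower bound (ELBO). For a VAE with encoder q(z|x), decoder p(x|z), and prior p(z), these quantities are related by the standard decomposition:

```latex
% Rate-distortion decomposition of the ELBO:
%   distortion D = -E_{q(z|x)}[ log p(x|z) ]   (reconstruction error)
%   rate       R =  KL( q(z|x) || p(z) )       (information carried by the code)
\mathrm{ELBO}(x)
  = \mathbb{E}_{q(z \mid x)}\bigl[\log p(x \mid z)\bigr]
    - D_{\mathrm{KL}}\bigl(q(z \mid x) \,\|\, p(z)\bigr)
  = -(D + R)
```

The abstract's point is that this objective alone only weakly tracks semantic content, which is why an auxiliary decoder is added as a regularizer. The sketch below shows the general shape of that idea: a second decoder reads the same latent code and contributes its own reconstruction loss, so the code cannot collapse to something the primary decoder ignores. This is a minimal sketch under assumed details (flattened 784-dimensional inputs, a simple linear auxiliary head, and a hypothetical `aux_weight` knob); the paper's actual decoders and auxiliary tasks are domain dependent and not reproduced here.

```python
# Minimal sketch of a VAE with an auxiliary ("dueling") decoder.
# Assumptions (not from the paper): 784-dim inputs in [0, 1], a Gaussian
# encoder, logits-Bernoulli decoders, and a hypothetical aux_weight knob.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DuelingDecoderVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, h_dim=256, aux_weight=1.0):
        super().__init__()
        self.aux_weight = aux_weight
        self.encoder = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Primary decoder: stands in for the powerful (e.g. autoregressive)
        # decoder that can otherwise learn to ignore z.
        self.primary = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))
        # Auxiliary decoder: a weaker head that must reconstruct x from z
        # alone, forcing z to retain information about x.
        self.auxiliary = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.primary(z), self.auxiliary(z), mu, logvar

    def loss(self, x):
        x_main, x_aux, mu, logvar = self(x)
        # Distortion (reconstruction error) for each decoder.
        d_main = F.binary_cross_entropy_with_logits(
            x_main, x, reduction="none").sum(-1)
        d_aux = F.binary_cross_entropy_with_logits(
            x_aux, x, reduction="none").sum(-1)
        # Rate: closed-form KL(q(z|x) || N(0, I)).
        rate = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        # With aux_weight = 0 this reduces to a plain negative ELBO.
        return (d_main + self.aux_weight * d_aux + rate).mean()

# Usage: one optimization step on a stand-in batch.
model = DuelingDecoderVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(16, 784)
opt.zero_grad()
model.loss(x).backward()
opt.step()
```

Setting `aux_weight` to zero recovers a plain negative ELBO, which makes the auxiliary term easy to ablate; the weight trades off how strongly the latent space is regularized against the primary decoder's freedom.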