Neural Gaussian Copula for Variational Autoencoder
EMNLP 2019. Variational language models seek to estimate the posterior of latent variables with an approximate variational posterior. The model often assumes the variational posterior to be factorized even when the true posterior is not, and the variational posterior learned under this assumption does not capture the dependency relationships among latent variables. We argue that this causes a typical training problem, called posterior collapse, commonly observed in variational language models. We propose the Gaussian Copula Variational Autoencoder (VAE) to avert this problem. Copulas are widely used to model the correlations and dependencies of high-dimensional random variables, and are therefore helpful for maintaining the dependency relationships that a factorized VAE loses. Empirical results show that by explicitly modeling the correlations of latent variables with a neural parametric copula, we avert this training difficulty while obtaining results competitive with other VAE approaches.
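The core idea described in the abstract, replacing the factorized Gaussian posterior with one whose latent dimensions are coupled through a Gaussian copula, can be sketched in a few lines. Below is a minimal, hypothetical PyTorch sketch, not the authors' implementation; the module name, layer sizes, and the Cholesky parameterization of the correlation matrix are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianCopulaPosterior(nn.Module):
    """Hypothetical sketch of a Gaussian-copula posterior q(z|x):
    each marginal stays N(mu_i, sigma_i^2), but a learned correlation
    matrix (via its Cholesky factor) couples the latent dimensions."""

    def __init__(self, hidden_dim: int, latent_dim: int):
        super().__init__()
        self.latent_dim = latent_dim
        self.mu = nn.Linear(hidden_dim, latent_dim)         # marginal means
        self.log_sigma = nn.Linear(hidden_dim, latent_dim)  # marginal log-scales
        # Raw entries of a lower-triangular Cholesky factor of the
        # copula correlation matrix, predicted from the encoder state.
        self.chol = nn.Linear(hidden_dim, latent_dim * latent_dim)

    def rsample(self, h: torch.Tensor) -> torch.Tensor:
        mu = self.mu(h)
        sigma = self.log_sigma(h).exp()
        d = self.latent_dim
        # Lower-triangular factor with a strictly positive diagonal.
        L = torch.tril(self.chol(h).view(-1, d, d))
        diag = torch.diagonal(L, dim1=-2, dim2=-1)
        L = L - torch.diag_embed(diag) + torch.diag_embed(F.softplus(diag) + 1e-5)
        # Row-normalize so L @ L.T has unit diagonal, i.e. it is a valid
        # correlation matrix: the "copula" part of the posterior.
        L = L / L.norm(dim=-1, keepdim=True)
        # Reparameterized sample: correlate standard normals, then match
        # each marginal to N(mu_i, sigma_i^2).
        eps = torch.randn_like(mu)
        u = torch.einsum("bij,bj->bi", L, eps)
        return mu + sigma * u

# Usage with a batch of encoder states (hypothetical sizes):
posterior = GaussianCopulaPosterior(hidden_dim=256, latent_dim=32)
z = posterior.rsample(torch.randn(8, 256))  # -> shape (8, 32)
```

With Gaussian marginals and a Gaussian copula, this q(z|x) is simply a full-covariance Gaussian N(mu, diag(sigma) R diag(sigma)) with R = L Lᵀ, so the KL term of the ELBO retains a closed form while the latent dependencies are no longer discarded.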
Saved in:
Main authors: | Wang, Prince Zizhuang; Wang, William Yang |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Computation and Language; Computer Science - Learning; Computer Science - Neural and Evolutionary Computing; Statistics - Machine Learning |
Online access: | Order full text |
creator | Wang, Prince Zizhuang; Wang, William Yang |
---|---|
format | Article |
creationdate | 2019-09-08 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
identifier | DOI: 10.48550/arxiv.1909.03569 |
language | eng |
recordid | cdi_arxiv_primary_1909_03569 |
source | arXiv.org |
subjects | Computer Science - Computation and Language; Computer Science - Learning; Computer Science - Neural and Evolutionary Computing; Statistics - Machine Learning |
title | Neural Gaussian Copula for Variational Autoencoder |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-03T07%3A18%3A50IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Neural%20Gaussian%20Copula%20for%20Variational%20Autoencoder&rft.au=Wang,%20Prince%20Zizhuang&rft.date=2019-09-08&rft_id=info:doi/10.48550/arxiv.1909.03569&rft_dat=%3Carxiv_GOX%3E1909_03569%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |