Doubly Semi-Implicit Variational Inference

We extend the existing framework of semi-implicit variational inference (SIVI) and introduce doubly semi-implicit variational inference (DSIVI), a way to perform variational inference and learning when both the approximate posterior and the prior distribution are semi-implicit. In other words, DSIVI performs inference in models where the prior and the posterior can be expressed as an intractable infinite mixture of some analytic density with a highly flexible implicit mixing distribution. We provide a sandwich bound on the evidence lower bound (ELBO) objective that can be made arbitrarily tight. Unlike discriminator-based and kernel-based approaches to implicit variational inference, DSIVI optimizes a proper lower bound on ELBO that is asymptotically exact. We evaluate DSIVI on a set of problems that benefit from implicit priors. In particular, we show that DSIVI gives rise to a simple modification of VampPrior, the current state-of-the-art prior for variational autoencoders, which improves its performance.
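
To make the construction in the abstract concrete, here is a minimal sketch in our own notation; the symbols q_phi, p_theta, psi, zeta and the sample count K are our assumptions and need not match the paper's exact formulation.

% Sketch only: notation (q_\phi, p_\theta, \psi, \zeta, K) is assumed, not the paper's.
% Semi-implicit posterior and prior: analytic conditionals mixed by implicit
% distributions that are easy to sample from but have intractable densities.
q_\phi(z) = \int q_\phi(z \mid \psi)\, q_\phi(\psi)\, d\psi ,
\qquad
p_\theta(z) = \int p_\theta(z \mid \zeta)\, p_\theta(\zeta)\, d\zeta .

% The ELBO contains both intractable log-marginals:
\mathcal{L} = \mathbb{E}_{q_\phi(z)}
  \bigl[ \log p(x \mid z) + \log p_\theta(z) - \log q_\phi(z) \bigr] .

% A lower bound that becomes exact as K \to \infty:
% (i) Jensen's inequality with K fresh mixing samples lower-bounds the prior term,
\mathbb{E}_{\zeta_{1:K} \sim p_\theta(\zeta)}
  \log \tfrac{1}{K} \textstyle\sum_{k=1}^{K} p_\theta(z \mid \zeta_k)
  \;\le\; \log p_\theta(z) ;
% (ii) reusing the mixing sample \psi_0 that generated z upper-bounds the entropy term,
\mathbb{E}_{q_\phi(z)} \log q_\phi(z) \;\le\;
\mathbb{E}_{\psi_{0:K} \sim q_\phi(\psi)}\,
\mathbb{E}_{z \sim q_\phi(z \mid \psi_0)}
  \log \tfrac{1}{K+1} \textstyle\sum_{k=0}^{K} q_\phi(z \mid \psi_k) .
% Substituting (i) and (ii) into \mathcal{L} yields a proper lower bound on the
% ELBO; an analogous construction for the opposite directions gives the upper
% half of the sandwich.

Both surrogate terms converge to the exact log-marginals as K grows, which is what makes the bound arbitrarily tight in the sense of the abstract.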

Bibliographic Details

Main Authors: Molchanov, Dmitry; Kharitonov, Valery; Sobolev, Artem; Vetrov, Dmitry
Format: Article
Date: 2018-10-05
Language: English
Subjects: Computer Science - Learning; Statistics - Machine Learning
Source: arXiv.org
DOI: 10.48550/arxiv.1810.02789
Online Access: https://arxiv.org/abs/1810.02789