Learning undirected models via query training

Typical amortized inference in variational autoencoders is specialized for a single probabilistic query. Here we propose an inference network architecture that generalizes to unseen probabilistic queries. Instead of an encoder-decoder pair, we train a single inference network directly from data, using a cost function that is stochastic not only over samples, but also over queries. This network can perform the same inference tasks as an undirected graphical model with hidden variables, without having to deal with the intractable partition function. The results can be mapped to the learning of an actual undirected model, which is a notoriously hard problem. The network also marginalizes nuisance variables as required. We show that our approach generalizes to unseen probabilistic queries on unseen test data, providing fast and flexible inference. Experiments show that this approach outperforms or matches PCD and AdVIL on 9 benchmark datasets.
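
The abstract compresses the method into one sentence: train a single inference network with a cost function that is stochastic over both samples and queries. Concretely, each training step draws a data sample and a random query mask (which variables are observed, which must be inferred), and penalizes the network's predictions only on the unobserved variables. Below is a minimal illustrative sketch of that idea in PyTorch; the architecture, variable count, and masking scheme are assumptions made for illustration, not the authors' actual implementation.

import torch
import torch.nn as nn

# Illustrative sizes (assumed, not from the paper): 20 binary variables.
n_vars, hidden = 20, 128

# Inference network: maps (masked values, observation mask) to a
# Bernoulli logit for every variable.
net = nn.Sequential(
    nn.Linear(2 * n_vars, hidden), nn.ReLU(),
    nn.Linear(hidden, n_vars),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss(reduction="none")

def training_step(x):
    # The cost is stochastic over queries: sample a fresh random mask
    # of observed variables at every step.
    observed = torch.bernoulli(torch.full_like(x, 0.5))
    logits = net(torch.cat([x * observed, observed], dim=-1))
    # Score predictions only on the unobserved (queried) variables.
    loss = (bce(logits, x) * (1 - observed)).sum(-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Synthetic binary data, purely to make the sketch runnable.
data = torch.bernoulli(torch.full((64, n_vars), 0.5))
for step in range(100):
    training_step(data)

At test time the same network answers an arbitrary query: fix the observed entries, zero out the rest, and read conditional probabilities for the unobserved variables off the logits. Randomizing the mask during training is what lets the network generalize to queries it never saw.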

Bibliographic Details
Main authors: Lazaro-Gredilla, Miguel; Lehrach, Wolfgang; George, Dileep
Format: Article
Language: English
Subjects: Computer Science - Learning; Statistics - Machine Learning
Online Access: https://arxiv.org/abs/1912.02893
Published: 2019-12-05
DOI: 10.48550/arxiv.1912.02893
Source: arXiv.org