$\pi$VAE: a stochastic process prior for Bayesian deep learning with MCMC

Stochastic processes provide a mathematically elegant way to model complex data. In theory, they provide flexible priors over function classes that can encode a wide range of interesting assumptions. In practice, however, efficient inference by optimisation or marginalisation is difficult, a problem further exacerbated with big data and high-dimensional input spaces. We propose a novel variational autoencoder (VAE) called the prior encoding variational autoencoder ($\pi$VAE). The $\pi$VAE is finitely exchangeable and Kolmogorov consistent, and thus is a continuous stochastic process. We use $\pi$VAE to learn low-dimensional embeddings of function classes. We show that our framework can accurately learn expressive function classes such as Gaussian processes, but also properties of functions to enable statistical inference (such as the integral of a log Gaussian process). For popular tasks, such as spatial interpolation, $\pi$VAE achieves state-of-the-art performance both in terms of accuracy and computational efficiency. Perhaps most usefully, we demonstrate that the low-dimensional, independently distributed latent space representation learnt provides an elegant and scalable means of performing Bayesian inference for stochastic processes within probabilistic programming languages such as Stan.
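
The abstract contains two ideas a short sketch can make concrete: a VAE trained on draws from a function prior (here a Gaussian process) learns a low-dimensional latent representation of that function class, and, with the decoder fixed, Bayesian inference for a new function reduces to MCMC over the latent space. The sketch below is a minimal illustration, not the authors' implementation: network sizes, the RBF kernel length-scale, KL weighting, and the training schedule are all illustrative assumptions; it omits $\pi$VAE's encoding of input locations via feature maps; and it substitutes a plain random-walk Metropolis sampler in Python for the Stan/HMC workflow the paper describes.

```python
# Minimal sketch (not the authors' code): learn a VAE prior over GP function
# draws, then run MCMC over the latent space given noisy observations.
# All sizes, length-scales, and training settings below are illustrative.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
n_points, n_draws, latent_dim = 50, 2000, 8

# Training data: draws from a GP prior (RBF kernel) on a fixed grid.
x = np.linspace(0.0, 1.0, n_points)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2) + 1e-6 * np.eye(n_points)
L = np.linalg.cholesky(K)
functions = (L @ rng.standard_normal((n_points, n_draws))).T.astype(np.float32)

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_points, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_points))

    def forward(self, y):
        mu, log_var = self.enc(y).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterisation
        return self.dec(z), mu, log_var

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.from_numpy(functions)
for step in range(2000):
    recon, mu, log_var = model(data)
    # Negative ELBO: reconstruction error plus KL(q(z|y) || N(0, I)),
    # with the KL term down-weighted here purely for training stability.
    kl = -0.5 * torch.mean(torch.sum(1 + log_var - mu ** 2 - log_var.exp(), dim=-1))
    loss = nn.functional.mse_loss(recon, data) + 1e-3 * kl
    opt.zero_grad()
    loss.backward()
    opt.step()

# With the decoder fixed, the N(0, I) latent acts as the learned function prior.
# Inference for a new, noisily observed function is MCMC over z: a random-walk
# Metropolis loop here, standing in for the Stan workflow named in the abstract.
def log_post(z, y_obs, idx, noise_sd=0.1):
    with torch.no_grad():
        f = model.dec(z)[0, idx]
    return float(-0.5 * torch.sum(z ** 2)
                 - 0.5 * torch.sum((y_obs - f) ** 2) / noise_sd ** 2)

idx = torch.arange(0, n_points, 5)                  # observe every 5th grid point
y_obs = data[0, idx] + 0.1 * torch.randn(len(idx))  # synthetic noisy observations
z = torch.zeros(1, latent_dim)
for _ in range(5000):                               # random-walk Metropolis over z
    prop = z + 0.1 * torch.randn_like(z)
    if np.log(rng.random()) < log_post(prop, y_obs, idx) - log_post(z, y_obs, idx):
        z = prop
with torch.no_grad():
    f_draw = model.dec(z)                           # one posterior function draw
```

In the paper's setting the trained decoder would instead be exported to a probabilistic programming language such as Stan, where z receives a standard normal prior and HMC explores the posterior; the Metropolis loop above is only a self-contained stand-in for that step.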

Bibliographic details
Main authors: Mishra, Swapnil; Flaxman, Seth; Berah, Tresnia; Zhu, Harrison; Pakkanen, Mikko; Bhatt, Samir
Format: Article
Language: English
Subjects: Computer Science - Learning; Statistics - Machine Learning
Online access: Order full text
creator Mishra, Swapnil; Flaxman, Seth; Berah, Tresnia; Zhu, Harrison; Pakkanen, Mikko; Bhatt, Samir
format Article
identifier DOI: 10.48550/arxiv.2002.06873
language eng
source arXiv.org
subjects Computer Science - Learning; Statistics - Machine Learning
title $\pi$VAE: a stochastic process prior for Bayesian deep learning with MCMC
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T12%3A20%3A18IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=pi$VAE:%20a%20stochastic%20process%20prior%20for%20Bayesian%20deep%20learning%20with%20MCMC&rft.au=Mishra,%20Swapnil&rft.date=2020-02-17&rft_id=info:doi/10.48550/arxiv.2002.06873&rft_dat=%3Carxiv_GOX%3E2002_06873%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true