Distilling Importance Sampling for Likelihood Free Inference

Likelihood-free inference involves inferring parameter values given observed data and a simulator model. The simulator is computer code which takes parameters, performs stochastic calculations, and outputs simulated data. In this work, we view the simulator as a function whose inputs are (1) the parameters and (2) a vector of pseudo-random draws. We attempt to infer all these inputs conditional on the observations. This is challenging as the resulting posterior can be high dimensional and involve strong dependence. We approximate the posterior using normalizing flows, a flexible parametric family of densities. Training data is generated by likelihood-free importance sampling with a large bandwidth value epsilon, which makes the target similar to the prior. The training data is "distilled" by using it to train an updated normalizing flow. The process is iterated, using the updated flow as the importance sampling proposal, and slowly reducing epsilon so the target becomes closer to the posterior. Unlike most other likelihood-free methods, we avoid the need to reduce data to low dimensional summary statistics, and hence can achieve more accurate results. We illustrate our method in two challenging examples, on queuing and epidemiology.
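The abstract describes an iterative scheme: draw parameters from a proposal, weight them by a likelihood-free (ABC-style) kernel with bandwidth epsilon, refit the proposal to the weighted sample, then shrink epsilon. The following is a minimal toy sketch of that loop, not the paper's implementation: a Gaussian proposal updated by weighted moment matching stands in for normalizing-flow training, and the simulator, prior, and annealing schedule are all illustrative assumptions.

```python
import numpy as np

def simulator(theta, u):
    # Toy simulator: observation = parameter + one standard normal draw.
    return theta + u

def distilled_is(y_obs, n_iters=8, n_samples=4000, eps0=5.0, seed=0):
    rng = np.random.default_rng(seed)
    prior_mu, prior_sigma = 0.0, 5.0   # assumed prior: theta ~ N(0, 5^2)
    mu, sigma = prior_mu, prior_sigma  # initial proposal = prior
    eps = eps0
    for _ in range(n_iters):
        # Sample the simulator inputs: parameters from the current
        # proposal, pseudo-random draws from their N(0, 1) prior.
        theta = rng.normal(mu, sigma, n_samples)
        u = rng.normal(0.0, 1.0, n_samples)
        y_sim = simulator(theta, u)

        # Tempered target: prior times a Gaussian ABC kernel with
        # bandwidth eps (large eps makes the target prior-like).
        log_prior = -0.5 * ((theta - prior_mu) / prior_sigma) ** 2
        log_kernel = -0.5 * ((y_sim - y_obs) / eps) ** 2
        log_q = -0.5 * ((theta - mu) / sigma) ** 2 - np.log(sigma)
        log_w = log_prior + log_kernel - log_q
        w = np.exp(log_w - log_w.max())
        w /= w.sum()

        # "Distil": refit the proposal to the weighted sample.
        # (Moment matching stands in for normalizing-flow training.)
        mu = float(np.sum(w * theta))
        sigma = float(np.sqrt(np.sum(w * (theta - mu) ** 2)))

        eps *= 0.6  # slowly shrink eps so the target approaches the posterior
    return mu, sigma, eps
```

For this conjugate toy model the exact posterior mean is `y_obs * 25 / 26`, so the fitted proposal can be checked against it; in the paper the Gaussian family is replaced by a normalizing flow, which removes this parametric restriction.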

Bibliographic Details
Main Authors: Prangle, Dennis; Viscardi, Cecilia
Format: Article
Language: English
Subjects: Statistics - Computation; Statistics - Machine Learning
Online Access: order full text
creator Prangle, Dennis; Viscardi, Cecilia
description Likelihood-free inference involves inferring parameter values given observed data and a simulator model. The simulator is computer code which takes parameters, performs stochastic calculations, and outputs simulated data. In this work, we view the simulator as a function whose inputs are (1) the parameters and (2) a vector of pseudo-random draws. We attempt to infer all these inputs conditional on the observations. This is challenging as the resulting posterior can be high dimensional and involve strong dependence. We approximate the posterior using normalizing flows, a flexible parametric family of densities. Training data is generated by likelihood-free importance sampling with a large bandwidth value epsilon, which makes the target similar to the prior. The training data is "distilled" by using it to train an updated normalizing flow. The process is iterated, using the updated flow as the importance sampling proposal, and slowly reducing epsilon so the target becomes closer to the posterior. Unlike most other likelihood-free methods, we avoid the need to reduce data to low dimensional summary statistics, and hence can achieve more accurate results. We illustrate our method in two challenging examples, on queuing and epidemiology.
doi_str_mv 10.48550/arxiv.1910.03632
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1910.03632
language eng
recordid cdi_arxiv_primary_1910_03632
source arXiv.org
subjects Statistics - Computation
Statistics - Machine Learning
title Distilling Importance Sampling for Likelihood Free Inference