Towards end‐to‐end likelihood‐free inference with convolutional neural networks

Complex simulator‐based models with non‐standard sampling distributions require sophisticated design choices for reliable approximate parameter inference. We introduce a fast, end‐to‐end approach for approximate Bayesian computation (ABC) based on fully convolutional neural networks. The method enables users of ABC to derive simultaneously the posterior mean and variance of multidimensional posterior distributions directly from raw simulated data.

Detailed description

Saved in:
Bibliographic details
Published in: British journal of mathematical & statistical psychology 2020-02, Vol.73 (1), p.23-43
Main authors: Radev, Stefan T., Mertens, Ulf K., Voss, Andreas, Köthe, Ullrich
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page 43
container_issue 1
container_start_page 23
container_title British journal of mathematical & statistical psychology
container_volume 73
creator Radev, Stefan T.
Mertens, Ulf K.
Voss, Andreas
Köthe, Ullrich
description Complex simulator‐based models with non‐standard sampling distributions require sophisticated design choices for reliable approximate parameter inference. We introduce a fast, end‐to‐end approach for approximate Bayesian computation (ABC) based on fully convolutional neural networks. The method enables users of ABC to derive simultaneously the posterior mean and variance of multidimensional posterior distributions directly from raw simulated data. Once trained on simulated data, the convolutional neural network is able to map real data samples of variable size to the first two posterior moments of the relevant parameters' distributions. Thus, in contrast to other machine learning approaches to ABC, our approach allows us to generate reusable models that can be applied by different researchers employing the same model. We verify the utility of our method on two common statistical models (i.e., a multivariate normal distribution and a multiple regression scenario), for which the posterior parameter distributions can be derived analytically. We then apply our method to recover the parameters of the leaky competing accumulator (LCA) model and we compare our results to the current state‐of‐the‐art technique, probability density approximation (PDA). Results show that our method exhibits a lower approximation error compared with other machine learning approaches to ABC. It also performs similarly to PDA in recovering the parameters of the LCA model.
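The core idea summarized in the description — learn a mapping from simulated data to the first two posterior moments, so that inference on new data needs no likelihood evaluations — can be illustrated with a deliberately simplified sketch. The paper's convolutional network is replaced here by a linear regression on a summary statistic (the sample mean) for a conjugate Gaussian toy model whose analytic posterior is known; the model, names, and numbers below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy amortized likelihood-free inference: regress parameters on
# simulated data so the fitted model approximates the posterior mean.
rng = np.random.default_rng(42)
n_sim, n_obs = 20_000, 50

# 1) Simulate training pairs (theta, dataset) from the generative model:
#    theta ~ N(0, 1) prior, observations x_i ~ N(theta, 1).
theta = rng.normal(0.0, 1.0, size=n_sim)
data = rng.normal(loc=theta[:, None], scale=1.0, size=(n_sim, n_obs))

# 2) "Network": least-squares fit of theta on the sample mean.
#    The regression function E[theta | data] is exactly the posterior
#    mean, so the fit approximates it without evaluating any likelihood.
xbar = data.mean(axis=1)
X = np.column_stack([xbar, np.ones(n_sim)])
coef, *_ = np.linalg.lstsq(X, theta, rcond=None)

# 3) Posterior variance estimate: mean squared residual of the fit
#    (analytically 1 / (n_obs + 1) for this conjugate model).
post_var_hat = np.mean((theta - X @ coef) ** 2)

# 4) Amortized inference on a new "observed" dataset is one forward
#    pass; compare with the analytic posterior mean n*xbar / (n + 1).
obs = rng.normal(loc=0.7, scale=1.0, size=n_obs)
post_mean_hat = coef[0] * obs.mean() + coef[1]
post_mean_true = n_obs * obs.mean() / (n_obs + 1)
print(post_mean_hat, post_mean_true, post_var_hat)
```

With enough simulations the fitted slope converges to n/(n+1) and the residual variance to 1/(n+1), matching the analytic posterior; the paper's contribution is doing this regression with a convolutional network on raw, variable-size data rather than on a hand-picked summary statistic.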
doi_str_mv 10.1111/bmsp.12159
format Article
fullrecord (raw ProQuest/Primo XML record omitted; it duplicates the title, creators, description, subjects, and identifiers listed elsewhere in this record)
publisher England: British Psychological Society
pmid 30793299
orcidid https://orcid.org/0000-0002-6702-9559
fulltext fulltext
identifier ISSN: 0007-1102
ispartof British journal of mathematical & statistical psychology, 2020-02, Vol.73 (1), p.23-43
issn 0007-1102
2044-8317
language eng
recordid cdi_proquest_miscellaneous_2185549931
source Wiley Online Library - AutoHoldings Journals; MEDLINE
subjects Accumulators
Algorithms
approximate Bayesian computation
Artificial neural networks
Bayes Theorem
Computer Simulation
convolutional network
Humans
Inference
leaky competing accumulator
Likelihood Functions
likelihood‐free inference
Machine Learning
Neural networks
Neural Networks, Computer
Normal distribution
Parameters
Regression Analysis
Statistical analysis
Statistical methods
Statistical models
title Towards end‐to‐end likelihood‐free inference with convolutional neural networks
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-24T14%3A46%3A37IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Towards%20end%E2%80%90to%E2%80%90end%20likelihood%E2%80%90free%20inference%20with%20convolutional%20neural%20networks&rft.jtitle=British%20journal%20of%20mathematical%20&%20statistical%20psychology&rft.au=Radev,%20Stefan%20T.&rft.date=2020-02&rft.volume=73&rft.issue=1&rft.spage=23&rft.epage=43&rft.pages=23-43&rft.issn=0007-1102&rft.eissn=2044-8317&rft_id=info:doi/10.1111/bmsp.12159&rft_dat=%3Cproquest_cross%3E2185549931%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2345276297&rft_id=info:pmid/30793299&rfr_iscdi=true