Compositional Score Modeling for Simulation-based Inference

Bibliographic Details
Main authors: Geffner, Tomas; Papamakarios, George; Mnih, Andriy
Format: Article
Language: eng
Online access: Order full text
creator Geffner, Tomas
Papamakarios, George
Mnih, Andriy
description Neural Posterior Estimation methods for simulation-based inference can be ill-suited for dealing with posterior distributions obtained by conditioning on multiple observations, as they tend to require a large number of simulator calls to learn accurate approximations. In contrast, Neural Likelihood Estimation methods can handle multiple observations at inference time after learning from individual observations, but they rely on standard inference methods, such as MCMC or variational inference, which come with certain performance drawbacks. We introduce a new method based on conditional score modeling that enjoys the benefits of both approaches. We model the scores of the (diffused) posterior distributions induced by individual observations, and introduce a way of combining the learned scores to approximately sample from the target posterior distribution. Our approach is sample-efficient, can naturally aggregate multiple observations at inference time, and avoids the drawbacks of standard inference methods.
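The description sketches the core idea: for conditionally independent observations, Bayes' rule gives p(θ|x_1..x_n) ∝ p(θ)^(1−n) ∏_i p(θ|x_i), so the score of the joint posterior is the sum of the individual posterior scores minus (n−1) times the prior score. A minimal toy sketch of this combination rule in a Gaussian model where every score is available in closed form (the setup and function names are illustrative, not from the paper; in the paper's setting the individual scores are learned diffusion-model networks, and the combination holds only approximately for the diffused posteriors):

```python
# Toy model (illustrative, not the paper's): prior theta ~ N(0, 1),
# likelihood x_i | theta ~ N(theta, 1), observations conditionally independent.
# Compositional rule for scores (gradients of log densities):
#   score_joint(theta) = sum_i score_i(theta) - (n - 1) * prior_score(theta)

def prior_score(theta):
    # d/dtheta log N(theta; 0, 1) = -theta
    return -theta

def single_obs_posterior_score(theta, x):
    # One-observation posterior is N(x/2, 1/2); its score is -(theta - x/2) / (1/2)
    return -2.0 * (theta - x / 2.0)

def combined_score(theta, xs):
    # Sum the individual posterior scores, then subtract the prior score
    # (n - 1) times, since each individual posterior already includes the prior.
    n = len(xs)
    return sum(single_obs_posterior_score(theta, x) for x in xs) \
        - (n - 1) * prior_score(theta)

# Sanity check against the analytic joint posterior N(sum(xs)/(n+1), 1/(n+1)).
xs = [1.0, 2.0, -0.5]
theta = 0.3
n = len(xs)
true_score = -(n + 1) * (theta - sum(xs) / (n + 1))
assert abs(combined_score(theta, xs) - true_score) < 1e-9
```

In the method the paper describes, the individual scores would be a single conditional network evaluated once per observation, and this combination is applied at each diffusion time step to drive an approximate sampler for the target posterior.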
doi_str_mv 10.48550/arxiv.2209.14249
format Article
identifier DOI: 10.48550/arxiv.2209.14249
language eng
recordid cdi_arxiv_primary_2209_14249
source arXiv.org
subjects Computer Science - Learning
Statistics - Machine Learning
title Compositional Score Modeling for Simulation-based Inference