Distilling Importance Sampling for Likelihood Free Inference
Format: Article
Language: English
Abstract: Likelihood-free inference involves inferring parameter values given observed
data and a simulator model. The simulator is computer code which takes
parameters, performs stochastic calculations, and outputs simulated data. In
this work, we view the simulator as a function whose inputs are (1) the
parameters and (2) a vector of pseudo-random draws. We attempt to infer all
these inputs conditional on the observations. This is challenging as the
resulting posterior can be high dimensional and involve strong dependence. We
approximate the posterior using normalizing flows, a flexible parametric family
of densities. Training data is generated by likelihood-free importance sampling
with a large bandwidth value epsilon, which makes the target similar to the
prior. The training data is "distilled" by using it to train an updated
normalizing flow. The process is iterated, using the updated flow as the
importance sampling proposal, and slowly reducing epsilon so the target becomes
closer to the posterior. Unlike most other likelihood-free methods, we avoid
the need to reduce data to low dimensional summary statistics, and hence can
achieve more accurate results. We illustrate our method in two challenging
examples, on queuing and epidemiology.
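The iterative scheme described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: a single Gaussian stands in for the normalizing flow, the simulator, prior, bandwidth schedule, and all variable names are hypothetical stand-ins chosen for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, u):
    # Toy simulator: inputs are (1) the parameter and (2) a pseudo-random
    # draw; output is simulated data (here simply their sum).
    return theta + u

y_obs = 1.5                     # observed data (toy, one-dimensional)
eps = 5.0                       # large initial bandwidth: target ~ prior
prior_mu, prior_sigma = 0.0, 3.0
mu, sigma = prior_mu, prior_sigma   # proposal starts at the prior

for it in range(20):
    # 1. Draw (theta, u) jointly from the current proposal.
    #    u is proposed from its prior, so its weight contribution cancels.
    theta = rng.normal(mu, sigma, size=2000)
    u = rng.normal(size=2000)
    y_sim = simulator(theta, u)

    # 2. Likelihood-free importance weights:
    #    prior(theta) * kernel_eps(y_sim - y_obs) / proposal(theta).
    log_prior = -0.5 * ((theta - prior_mu) / prior_sigma) ** 2
    log_kernel = -0.5 * ((y_sim - y_obs) / eps) ** 2
    log_prop = -0.5 * ((theta - mu) / sigma) ** 2 - np.log(sigma)
    w = np.exp(log_prior + log_kernel - log_prop)
    w /= w.sum()

    # 3. "Distil" the weighted sample into an updated proposal
    #    (a flow would instead be trained by weighted maximum likelihood).
    mu = np.sum(w * theta)
    sigma = np.sqrt(np.sum(w * (theta - mu) ** 2))

    # 4. Slowly reduce epsilon so the target approaches the posterior.
    eps = max(0.9 * eps, 0.1)
```

With the updated proposal used at each round, the importance sampling target can be tightened gradually without the weights degenerating, which is the role the normalizing flow plays in the paper.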
DOI: 10.48550/arxiv.1910.03632