Optimally-Weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference
Main authors:
Format: Article
Language: English
Keywords:
Online access: Order full text
Abstract: Likelihood-free inference methods typically make use of a distance between simulated and real data. A common example is the maximum mean discrepancy (MMD), which has previously been used for approximate Bayesian computation, minimum distance estimation, generalised Bayesian inference, and within the nonparametric learning framework. The MMD is commonly estimated at a root-$m$ rate, where $m$ is the number of simulated samples. This can lead to significant computational challenges since a large $m$ is required to obtain an accurate estimate, which is crucial for parameter estimation. In this paper, we propose a novel estimator for the MMD with significantly improved sample complexity. The estimator is particularly well suited for computationally expensive smooth simulators with low- to mid-dimensional inputs. This claim is supported through both theoretical results and an extensive simulation study on benchmark simulators.
DOI: 10.48550/arxiv.2301.11674
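
For context on the root-$m$ rate mentioned in the abstract, the sketch below shows the standard unbiased (U-statistic) estimator of the squared MMD, i.e. the baseline estimator the paper aims to improve upon, not the optimally-weighted estimator it proposes. The Gaussian kernel and its bandwidth `lengthscale` are illustrative assumptions, not choices taken from the paper.

```python
# Minimal sketch: standard unbiased U-statistic estimator of MMD^2
# with a Gaussian kernel (an assumed kernel choice for illustration).
import numpy as np

def gaussian_kernel(X, Y, lengthscale=1.0):
    """Gram matrix with entries k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2))."""
    sq_dists = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * lengthscale ** 2))

def mmd2_unbiased(X, Y, lengthscale=1.0):
    """Unbiased estimate of MMD^2 between samples X (shape (m, d)) and Y (shape (n, d))."""
    m, n = X.shape[0], Y.shape[0]
    Kxx = gaussian_kernel(X, X, lengthscale)
    Kyy = gaussian_kernel(Y, Y, lengthscale)
    Kxy = gaussian_kernel(X, Y, lengthscale)
    # Exclude diagonal (same-sample) terms so the estimator is unbiased.
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    term_xy = 2.0 * Kxy.mean()
    return term_xx + term_yy - term_xy

# Usage example: X plays the role of simulated data, Y of observed data.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))
Y = rng.normal(0.5, 1.0, size=(500, 2))
print(mmd2_unbiased(X, Y))
```

With $m$ simulated samples, the error of this kind of estimator typically decays at the $O(m^{-1/2})$ rate quoted in the abstract, which is why a large number of simulator runs is needed for accurate estimates when the simulator is expensive.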