The Polynomial Stein Discrepancy for Assessing Moment Convergence
Format: Article
Language: English
Abstract: We propose a novel method for measuring the discrepancy between a set of samples and a desired posterior distribution for Bayesian inference. Classical methods for assessing sample quality, such as the effective sample size, are not appropriate for scalable Bayesian sampling algorithms like stochastic gradient Langevin dynamics, which are asymptotically biased. Instead, the gold standard is the kernel Stein discrepancy (KSD), which is itself not scalable given its quadratic cost in the number of samples. The KSD and its faster extensions also typically suffer from the curse of dimensionality and can require extensive tuning. To address these limitations, we develop the polynomial Stein discrepancy (PSD) and an associated goodness-of-fit test. While the new test is not fully convergence-determining, we prove that it detects differences in the first r moments in the Bernstein-von Mises limit. We show empirically that the test has higher power than its competitors in several examples, and at a lower computational cost. Finally, we demonstrate that the PSD can help practitioners select the hyperparameters of Bayesian sampling algorithms more efficiently than competing approaches.
DOI: 10.48550/arxiv.2412.05135
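
The record above does not spell out the PSD construction, so the following is purely an illustrative sketch, not the authors' method: it shows how a Langevin Stein operator applied to low-order polynomial test functions yields moment-sensitive statistics at cost linear in the number of samples (in contrast to the KSD's quadratic cost). The 1-D setting, the monomial basis, and all names here (`stein_poly_stats`, `grad_log_p`, `r`) are assumptions introduced for illustration.

```python
import numpy as np

def stein_poly_stats(samples, grad_log_p, r=3):
    """Sample means of the Langevin Stein operator applied to monomial
    test functions f_k(x) = x^k for k = 1..r (1-D illustration only).

    (A_p f)(x) = f(x) * d/dx log p(x) + f'(x) has mean zero under the
    target p, so a large |sample mean| flags a mismatch in the moments
    the polynomial f probes. Cost is O(r * n), linear in sample size.
    """
    x = np.asarray(samples, dtype=float)
    g = grad_log_p(x)  # score of the target, evaluated at each sample
    stats = []
    for k in range(1, r + 1):
        # x^k * score + derivative of x^k; averages to 0 under p
        stats.append(np.mean(x**k * g + k * x**(k - 1)))
    return np.array(stats)

# Usage: samples from N(0.3, 1) checked against a standard normal
# target (score: d/dx log p(x) = -x). The nonzero statistics reflect
# the shifted mean, the kind of moment difference a PSD-style test
# is designed to detect.
rng = np.random.default_rng(0)
xs = rng.normal(0.3, 1.0, size=5000)
print(stein_poly_stats(xs, grad_log_p=lambda x: -x, r=3))
```

A test built on such statistics would aggregate them into a single discrepancy (for example, a weighted norm) and calibrate a rejection threshold; those details are exactly what the paper's goodness-of-fit test supplies and are not reproduced here.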