On stochastic gradient Langevin dynamics with dependent data streams in the logconcave case
Format: Article
Language: English
Abstract: We study the problem of sampling from a probability distribution $\pi$ on $\mathbb{R}^d$ which has a density with respect to the Lebesgue measure known up to a normalization factor, $x \mapsto e^{-U(x)} / \int_{\mathbb{R}^d} e^{-U(y)} \, \mathrm{d}y$. We analyze a sampling method based on the Euler discretization of the Langevin stochastic differential equation under the assumptions that the potential $U$ is continuously differentiable, $\nabla U$ is Lipschitz, and $U$ is strongly convex. We focus on the case where the gradient of the log-density cannot be computed directly, but unbiased estimates of the gradient from possibly dependent observations are available. This setting can be seen as a combination of a stochastic approximation (here stochastic gradient) algorithm with discretized Langevin dynamics. We obtain an upper bound on the Wasserstein-2 distance between the law of the iterates of this algorithm and the target distribution $\pi$, with constants depending explicitly on the Lipschitz and strong convexity constants of the potential and on the dimension of the space. Finally, under weaker assumptions on $U$ and its gradient, but in the presence of independent observations, we obtain analogous results in Wasserstein-2 distance.
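In its standard form, the Euler discretization with stochastic gradients described above is the stochastic gradient Langevin dynamics (SGLD) recursion $\theta_{n+1} = \theta_n - \lambda\, H(\theta_n, X_{n+1}) + \sqrt{2\lambda}\, \xi_{n+1}$, where $H$ is an unbiased estimate of $\nabla U$, $\lambda$ is the step size, and $\xi_{n+1}$ is standard Gaussian noise. As a rough illustration only (not the paper's setting or experiments), the sketch below runs this recursion on a hypothetical strongly log-concave Gaussian target with artificially noisy gradients; the matrix `A`, noise level, step size, and iteration count are invented for the example.

```python
import numpy as np

# Minimal SGLD sketch on a toy target (hypothetical example, not from the paper):
# pi(x) ∝ exp(-U(x)) with U(x) = 0.5 * x^T A x, so U is strongly convex and
# grad U = A x is Lipschitz; only noisy unbiased gradient estimates are used.

rng = np.random.default_rng(0)
d = 2
A = np.array([[2.0, 0.5], [0.5, 1.5]])  # positive definite => strongly convex U

def noisy_grad(x):
    """Unbiased estimate of grad U(x) = A x, corrupted by observation noise."""
    return A @ x + 0.1 * rng.standard_normal(d)

lam = 0.01        # step size of the Euler discretization (assumed value)
n_iters = 50_000
x = np.zeros(d)
samples = np.empty((n_iters, d))

for n in range(n_iters):
    # Euler step of dX_t = -grad U(X_t) dt + sqrt(2) dB_t, with the exact
    # gradient replaced by a stochastic estimate.
    x = x - lam * noisy_grad(x) + np.sqrt(2.0 * lam) * rng.standard_normal(d)
    samples[n] = x

# For this target, pi = N(0, A^{-1}); compare empirical vs. true covariance,
# discarding the first half of the chain as burn-in.
print("empirical cov:\n", np.cov(samples[n_iters // 2:].T))
print("target cov (A^-1):\n", np.linalg.inv(A))
```

With a small step size $\lambda$, the empirical covariance of the later iterates should approach $A^{-1}$ up to a discretization and gradient-noise bias, which is the kind of error the paper's Wasserstein-2 bounds quantify.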
DOI: 10.48550/arxiv.1812.02709