On stochastic gradient Langevin dynamics with dependent data streams in the logconcave case
Published in: Bernoulli: Official Journal of the Bernoulli Society for Mathematical Statistics and Probability, 2021-02, Vol. 27 (1), p. 1-33
Format: Article
Language: English
Online access: Full text
Abstract: We study the problem of sampling from a probability distribution $\pi$ on $\mathbb{R}^d$ which has a density w.r.t. the Lebesgue measure, known up to a normalization factor: $x \mapsto e^{-U(x)} / \int_{\mathbb{R}^d} e^{-U(y)}\,dy$. We analyze a sampling method based on the Euler discretization of the Langevin stochastic differential equation, under the assumptions that the potential $U$ is continuously differentiable, $\nabla U$ is Lipschitz, and $U$ is strongly convex. We focus on the case where the gradient of the log-density cannot be directly computed, but unbiased estimates of the gradient from possibly dependent observations are available. This setting can be seen as a combination of a stochastic approximation (here, stochastic gradient) algorithm with discretized Langevin dynamics. We obtain an upper bound on the Wasserstein-2 distance between the law of the iterates of this algorithm and the target distribution $\pi$, with constants depending explicitly on the Lipschitz and strong convexity constants of the potential and on the dimension of the space. Finally, under weaker assumptions on $U$ and its gradient, but in the presence of independent observations, we obtain analogous results in Wasserstein-2 distance.
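The scheme analyzed in the abstract — the Euler discretization of the Langevin SDE driven by unbiased stochastic gradient estimates — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Gaussian toy target (U(x) = ||x||^2/2, which is strongly convex with Lipschitz gradient), the `noisy_grad` estimator, and the step size are assumptions chosen for demonstration.

```python
import numpy as np

def sgld(grad_estimate, x0, step, n_iters, rng):
    """Stochastic gradient Langevin dynamics:
    x_{k+1} = x_k - step * G(x_k) + sqrt(2 * step) * N(0, I),
    where G is an unbiased estimate of grad U."""
    x = np.array(x0, dtype=float)
    samples = np.empty((n_iters, x.size))
    for k in range(n_iters):
        noise = rng.standard_normal(x.size)
        x = x - step * grad_estimate(x, rng) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Toy target: standard Gaussian, U(x) = ||x||^2 / 2, so grad U(x) = x.
# The zero-mean perturbation mimics an unbiased minibatch gradient estimate.
def noisy_grad(x, rng):
    return x + 0.1 * rng.standard_normal(x.size)

rng = np.random.default_rng(0)
samples = sgld(noisy_grad, np.zeros(2), step=0.05, n_iters=20000, rng=rng)
burn = samples[5000:]  # discard burn-in before estimating moments
print(burn.mean(axis=0), burn.var(axis=0))
```

With a small step size the empirical mean and variance of the retained iterates approach those of the standard Gaussian target, up to a discretization bias that shrinks with the step size — consistent with the Wasserstein-2 bounds described in the abstract.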
ISSN: 1350-7265, 1573-9759
DOI: 10.3150/19-BEJ1187