Convergence of Unadjusted Langevin in High Dimensions: Delocalization of Bias
Format: Article
Language: English
Abstract: The unadjusted Langevin algorithm is commonly used to sample probability
distributions in extremely high-dimensional settings. However, existing
analyses of the algorithm for strongly log-concave distributions suggest that,
as the dimension $d$ of the problem increases, the number of iterations
required to ensure convergence within a desired error in the $W_2$ metric
scales in proportion to $d$ or $\sqrt{d}$. In this paper, we argue that,
despite this poor scaling of the $W_2$ error for the full set of variables, the
behavior for a small number of variables can be significantly better: a number
of iterations proportional to $K$, up to logarithmic terms in $d$, often
suffices for the algorithm to converge to within a desired $W_2$ error for all
$K$-marginals. We refer to this effect as delocalization of bias. We show that
the delocalization effect does not hold universally and prove its validity for
Gaussian distributions and strongly log-concave distributions with certain
sparse interactions. Our analysis relies on a novel $W_{2,\ell^\infty}$ metric
to measure convergence. A key technical challenge we address is the lack of a
one-step contraction property in this metric. Finally, we use asymptotic
arguments to explore potential generalizations of the delocalization effect
beyond the Gaussian and sparse interactions setting.
DOI: 10.48550/arxiv.2408.13115
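As a rough, self-contained illustration of the unadjusted Langevin algorithm discussed in the abstract, the sketch below (not code from the paper; the tridiagonal precision matrix, step size, dimension, and chain count are all arbitrary assumptions) runs ULA on a Gaussian target with sparse interactions and checks the sampled variance of a single coordinate against its target value, i.e. the kind of low-dimensional marginal to which the delocalization-of-bias phenomenon refers.

```python
import numpy as np

# Toy sketch (not from the paper): unadjusted Langevin algorithm (ULA) for a
# Gaussian target N(0, Sigma) with a sparse (tridiagonal) precision matrix P,
# so the gradient of the negative log-density is x -> P x.  ULA update:
#   x_{k+1} = x_k - h * P x_k + sqrt(2h) * xi_k,   xi_k ~ N(0, I).
# Dimension, step size, and chain count are arbitrary demo choices.

def ula_gaussian(precision, h, n_steps, n_chains, rng):
    """Run n_chains independent ULA chains; rows of X are the chain states."""
    d = precision.shape[0]
    X = np.zeros((n_chains, d))
    for _ in range(n_steps):
        noise = rng.standard_normal((n_chains, d))
        X = X - h * X @ precision + np.sqrt(2.0 * h) * noise  # precision is symmetric
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 100
    # Tridiagonal precision: nearest-neighbor ("sparse") interactions.
    precision = 2.0 * np.eye(d) - 0.5 * np.eye(d, k=1) - 0.5 * np.eye(d, k=-1)
    X = ula_gaussian(precision, h=0.05, n_steps=1000, n_chains=1000, rng=rng)
    sigma = np.linalg.inv(precision)
    # Accuracy of a single-coordinate marginal (a K = 1 marginal): the empirical
    # variance should be close to the target variance Sigma_{00}, up to the
    # O(h) discretization bias of ULA and Monte Carlo error.
    print(f"coord 0: empirical var = {X[:, 0].var():.3f}, target var = {sigma[0, 0]:.3f}")
```

Measuring the error coordinate-wise like this, rather than over the full $d$-dimensional state, is an informal analogue of the marginal-wise, $W_{2,\ell^\infty}$-type guarantees the abstract describes.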