Convergence rates of non-stationary and deep Gaussian process regression
Saved in:
Main authors: ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: The focus of this work is the convergence of non-stationary and deep Gaussian
process regression. More precisely, we follow a Bayesian approach to regression
or interpolation, where the prior placed on the unknown function $f$ is a
non-stationary or deep Gaussian process, and we derive convergence rates of the
posterior mean to the true function $f$ in terms of the number of observed
training points. In some cases, we also show convergence of the posterior
variance to zero. The only assumption imposed on the function $f$ is that it is
an element of a certain reproducing kernel Hilbert space, which in
particular cases we show to be norm-equivalent to a Sobolev space. Our analysis
includes the case of estimated hyper-parameters in the covariance kernels
employed, both in an empirical Bayes setting and in the particular hierarchical
setting constructed through deep Gaussian processes. We consider the settings
of noise-free or noisy observations on deterministic or random training points.
We establish general assumptions sufficient for the convergence of deep
Gaussian process regression, along with explicit examples demonstrating the
fulfilment of these assumptions. Specifically, our examples require that the
H\"older or Sobolev norms of the penultimate layer are bounded almost surely. |
DOI: 10.48550/arxiv.2312.07320