Convergence rates of vector-valued local polynomial regression
Format: Article
Language: English
Abstract: Non-parametric estimation of functions and of their derivatives by means of local polynomial regression has been studied in the literature since the late 1970s. Given a set of noisy samples of a $\mathcal{C}^k$ smooth function, we perform a local polynomial fit, and by taking its $m$-th derivative we obtain an estimate of the $m$-th derivative of the function. The known optimal rate of convergence for this problem for a $k$-times smooth function $f:\mathbb{R}^d \to \mathbb{R}$ is $n^{-\frac{k-m}{2k + d}}$. However, in modern applications we often have to estimate a function mapping into $\mathbb{R}^D$ for extremely large $D \gg d$. In this work, we prove that these same rates of convergence are also achievable by local polynomial regression in the case of a high-dimensional target, given some assumptions on the noise distribution. This result extends Stone's seminal work from 1980 to the regime of a high-dimensional target domain. In addition, we unveil a connection between the failure probability $\varepsilon$ and the number of samples required to achieve the optimal rates.
DOI: 10.48550/arxiv.2107.05852
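
To make the estimator described in the abstract concrete, here is a minimal one-dimensional sketch (not code from the paper; the Epanechnikov kernel, the bandwidth `h`, and the fitting degree are illustrative choices) of estimating the $m$-th derivative at a point by differentiating a kernel-weighted local polynomial fit:

```python
import numpy as np
from math import factorial

def local_poly_derivative(x, y, x0, m=1, degree=2, h=0.3):
    """Estimate f^(m)(x0) from noisy samples (x, y) via a kernel-weighted
    local polynomial fit. Requires degree >= m; h is the bandwidth."""
    t = (x - x0) / h
    w = np.where(np.abs(t) <= 1.0, 0.75 * (1.0 - t**2), 0.0)  # Epanechnikov weights
    X = np.vander(x - x0, degree + 1, increasing=True)        # columns: (x - x0)^j
    sw = np.sqrt(w)
    # Weighted least squares: fit f(x) ~ sum_j beta_j (x - x0)^j near x0.
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    # Matching Taylor coefficients gives f^(m)(x0) ~ m! * beta_m.
    return factorial(m) * beta[m]

# Toy usage: f(x) = sin(3x), so the true f'(0) = 3.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 500)
y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(500)
print(local_poly_derivative(x, y, x0=0.0, m=1))  # roughly 3
```

Since `np.linalg.lstsq` solves each target column independently, the same fit extends coordinate-wise to a vector-valued target in $\mathbb{R}^D$ (pass `y` of shape `(n, D)` and weight it as `y * sw[:, None]`), which is the high-dimensional regime the paper analyzes.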