On the Double Descent of Random Features Models Trained with SGD
| Field | Value |
|---|---|
| Main authors | , , |
| Format | Article |
| Language | English |
| Subjects | |
| Online access | Order full text |
Summary: We study the generalization properties of random features (RF) regression in high dimensions, optimized by stochastic gradient descent (SGD), in the under- and over-parameterized regimes. We derive precise non-asymptotic error bounds for RF regression under both constant and polynomial-decay step-size SGD, and observe the double descent phenomenon both theoretically and empirically. Our analysis shows how to handle multiple sources of randomness (initialization, label noise, and data sampling, as well as stochastic gradients) in the absence of a closed-form solution, and it goes beyond the commonly used Gaussian/spherical data assumption. The theoretical results demonstrate that, with SGD training, RF regression still generalizes well in the interpolation regime, and they characterize the double descent behavior through the unimodality of the variance and the monotonic decrease of the bias. In addition, we prove that constant step-size SGD incurs no loss in convergence rate compared to the exact minimum-norm interpolator, providing a theoretical justification for using SGD in practice.
DOI: 10.48550/arxiv.2110.06910
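For a concrete picture of the setup described in the summary, the following is a minimal illustrative sketch, not the authors' code: it trains the outer weights of a ReLU random features model with single-sample, constant step-size SGD on synthetic Gaussian data and sweeps the number of features m across the interpolation threshold m ≈ n, which is where a double descent curve in test error would be expected. Every name and parameter here (the ReLU feature map, the Gaussian data model, the step size, the epoch count) is an assumption made for illustration; in particular, the paper's analysis goes beyond Gaussian/spherical data.

```python
import numpy as np

rng = np.random.default_rng(0)


def relu(x):
    return np.maximum(x, 0.0)


def make_data(n, d, w_star, noise=0.1):
    """Synthetic regression data with a linear target; a stand-in for the true data distribution."""
    X = rng.standard_normal((n, d)) / np.sqrt(d)
    y = X @ w_star + noise * rng.standard_normal(n)
    return X, y


def rf_features(X, W):
    """Random features phi(x) = ReLU(W x) / sqrt(m), with W the frozen first-layer weights."""
    return relu(X @ W.T) / np.sqrt(W.shape[0])


def sgd_rf_regression(X, y, m, step=0.5, epochs=50):
    """Fit the outer weights of an RF model with single-sample, constant step-size SGD."""
    n, d = X.shape
    W = rng.standard_normal((m, d))   # random (frozen) first-layer weights
    Phi = rf_features(X, W)           # n x m random feature matrix
    theta = np.zeros(m)               # trainable outer weights, zero initialization
    for _ in range(epochs):
        for i in rng.permutation(n):  # one stochastic gradient step per sample
            grad = (Phi[i] @ theta - y[i]) * Phi[i]
            theta -= step * grad
    return W, theta


def test_mse(W, theta, X, y):
    return float(np.mean((rf_features(X, W) @ theta - y) ** 2))


if __name__ == "__main__":
    n, d = 200, 20
    w_star = rng.standard_normal(d)
    X_train, y_train = make_data(n, d, w_star)
    X_test, y_test = make_data(2000, d, w_star, noise=0.0)
    # Sweep the number of random features m across the interpolation threshold m ~ n.
    for m in [20, 50, 100, 200, 400, 800, 1600]:
        W, theta = sgd_rf_regression(X_train, y_train, m)
        print(f"m = {m:5d}   test MSE = {test_mse(W, theta, X_test, y_test):.4f}")
```

Plotting the printed test MSE against m (for example with matplotlib) would typically show the under-parameterized branch, a peak near m ≈ n where label noise is interpolated, and a second descent for m ≫ n, consistent with the unimodal-variance and monotonically-decreasing-bias picture described in the summary.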