Tight Risk Bounds for Gradient Descent on Separable Data
Saved in:
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | We study the generalization properties of unregularized gradient methods applied to separable linear classification -- a setting that has received considerable attention since the pioneering work of Soudry et al. (2018). We establish tight upper and lower (population) risk bounds for gradient descent in this setting, for any smooth loss function, expressed in terms of its tail decay rate. Our bounds take the form $\Theta(r_{\ell,T}^2 / \gamma^2 T + r_{\ell,T}^2 / \gamma^2 n)$, where $T$ is the number of gradient steps, $n$ is the size of the training set, $\gamma$ is the data margin, and $r_{\ell,T}$ is a complexity term that depends on the tail decay rate of the loss function (and on $T$). Our upper bound matches the best known upper bounds due to Shamir (2021) and Schliserman and Koren (2022), while extending their applicability to virtually any smooth loss function and relaxing the technical assumptions they impose. Our risk lower bounds are the first in this context and establish the tightness of our upper bounds for any given tail decay rate and in all parameter regimes. The proof technique used to establish these results is also markedly simpler than in previous work, and extends straightforwardly to other gradient methods; we illustrate this by providing analogous results for Stochastic Gradient Descent. |
DOI: | 10.48550/arxiv.2303.01135 |
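
To make the setting in the abstract concrete, below is a minimal, self-contained sketch (not taken from the paper): full-batch, unregularized gradient descent with the smooth logistic loss on synthetic linearly separable data with a known margin $\gamma$. The data-generation scheme, step size, iteration count, and all variable names are illustrative assumptions, and the loss-dependent complexity term $r_{\ell,T}$ from the $\Theta(r_{\ell,T}^2 / \gamma^2 T + r_{\ell,T}^2 / \gamma^2 n)$ bound is not computed here.

```python
# Sketch only: unregularized full-batch gradient descent on separable data
# with the logistic loss, i.e. the kind of setting the abstract analyzes.
# Data generation, step size, and iteration count are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic linearly separable data with margin at least gamma -----------
n, d = 200, 20
w_star = np.zeros(d)
w_star[0] = 1.0                      # unit-norm separating direction
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)
y[y == 0] = 1.0
gamma = 0.5
X += (gamma * y)[:, None] * w_star   # push each point gamma away from the boundary
margin = np.min(y * (X @ w_star))    # empirical margin w.r.t. w_star (>= gamma)

# --- Gradient descent on the smooth logistic loss, no regularization --------
def logistic_loss(w):
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

def logistic_grad(w):
    s = -y / (1.0 + np.exp(y * (X @ w)))   # derivative of log(1 + e^{-z}) at z = y x.w
    return (X * s[:, None]).mean(axis=0)

T = 1000     # number of gradient steps
eta = 0.1    # step size (assumed small enough relative to the smoothness constant)
w = np.zeros(d)
for _ in range(T):
    w -= eta * logistic_grad(w)

print(f"empirical margin gamma ~ {margin:.3f}")
print(f"training logistic loss after {T} steps: {logistic_loss(w):.2e}")
# The abstract's result concerns the *population* risk of the resulting iterate,
# which it bounds by Theta(r^2 / (gamma^2 T) + r^2 / (gamma^2 n)) with
# r = r_{ell,T} a complexity term determined by the loss's tail decay rate.
```

The first term of the bound shrinks with more gradient steps $T$ and the second with more samples $n$, so in the sketch one can vary `T` and `n` independently to see which term dominates; both terms degrade as the margin `gamma` shrinks.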