Characterizing the implicit bias via a primal-dual analysis
Main authors: | , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Abstract: | This paper shows that the implicit bias of gradient descent on linearly separable data is exactly characterized by the optimal solution of a dual optimization problem given by a smoothed margin, even for general losses. This is in contrast to prior results, which are often tailored to exponentially-tailed losses. For the exponential loss specifically, with $n$ training examples and $t$ gradient descent steps, our dual analysis further allows us to prove an $O(\ln(n)/\ln(t))$ convergence rate to the $\ell_2$ maximum margin direction when a constant step size is used. This rate is tight in both $n$ and $t$, which prior work had not shown. On the other hand, with a properly chosen but aggressive step size schedule, we prove $O(1/t)$ rates for both $\ell_2$ margin maximization and implicit bias, whereas prior work (including all first-order methods for the general hard-margin linear SVM problem) proved $\widetilde{O}(1/\sqrt{t})$ margin rates, or $O(1/t)$ margin rates to a suboptimal margin, with an implied (slower) bias rate. Our key observations include that gradient descent on the primal variable naturally induces a mirror descent update on the dual variable, and that the dual objective in this setting is smooth enough to give a faster rate. |
DOI: | 10.48550/arxiv.1906.04540 |
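
As an illustrative companion to the abstract, the following is a minimal Python sketch of the setting it describes, assuming synthetic unit-norm linearly separable data, the exponential loss, and a constant step size; all of these choices are made here for illustration and are not taken from the paper. Gradient descent runs on the primal weights while the normalized per-example losses serve as a dual variable, which is standard in this line of work; by a minimax duality, the primal margin of the iterate and the dual quantity $\|Z^\top q\|$ sandwich the $\ell_2$ maximum margin, loosely mirroring the primal-dual viewpoint of the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linearly separable data (an assumption for illustration only):
# unit-norm examples x_i with labels y_i = sign(<x_i, w_star>), folded into
# z_i = y_i x_i so that separability means <z_i, w> > 0 for all i.
n, d = 20, 5
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sign(X @ w_star)
Z = y[:, None] * X

def primal_margin(w):
    """Normalized margin min_i <z_i, w> / ||w||, a lower bound on the l2 max margin."""
    nrm = np.linalg.norm(w)
    return float(np.min(Z @ w) / nrm) if nrm > 0 else 0.0

w = np.zeros(d)
eta = 1.0  # constant step size, matching the abstract's first setting (value chosen for illustration)
for t in range(1, 100_001):
    losses = np.exp(-Z @ w)          # per-example exponential losses exp(-<z_i, w>)
    w += eta * (Z.T @ losses) / n    # gradient descent on R(w) = (1/n) sum_i exp(-<z_i, w>)
    if t in (10, 100, 1_000, 10_000, 100_000):
        q = losses / losses.sum()    # dual variable: normalized per-example losses
        # Sandwich: primal_margin(w) <= l2 max margin <= ||Z^T q|| (minimax duality over the simplex).
        print(f"t={t:>6d}  primal margin={primal_margin(w):.4f}  dual bound={np.linalg.norm(Z.T @ q):.4f}")
```

Under these assumptions, the printed lower bound (primal margin) should rise and the upper bound ($\|Z^\top q\|$) should fall toward the same limit, the $\ell_2$ maximum margin; this sketch makes no claim about matching the paper's exact rates or its smoothed-margin and dual construction.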