The Implicit Bias of Gradient Descent on Separable Data
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: We examine gradient descent on unregularized logistic regression problems, with homogeneous linear predictors on linearly separable datasets. We show the predictor converges to the direction of the max-margin (hard margin SVM) solution. The result also generalizes to other monotone decreasing loss functions with an infimum at infinity, to multi-class problems, and to training a weight layer in a deep network in a certain restricted setting. Furthermore, we show this convergence is very slow, and only logarithmic in the convergence of the loss itself. This can help explain the benefit of continuing to optimize the logistic or cross-entropy loss even after the training error is zero and the training loss is extremely small, and, as we show, even if the validation loss increases. Our methodology can also aid in understanding implicit regularization in more complex models and with other optimization methods.
DOI: 10.48550/arxiv.1710.10345
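The abstract's claims lend themselves to a quick numerical illustration. Below is a minimal NumPy/SciPy sketch (not the authors' code) on a toy 2-D dataset I constructed so that the hard-margin direction of a bias-free linear predictor is (1, 0) by symmetry; the step size `eta = 0.5` and the extra off-margin point are arbitrary choices. Under these assumptions it shows the training loss shrinking quickly while the angle between w/||w|| and the max-margin direction decays only slowly, consistent with the logarithmic rate described above.

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

# Toy separable dataset. The four symmetric points act as support vectors,
# so the hard-margin (max-margin) direction of a bias-free linear predictor
# is w* = (1, 0) by construction; the fifth point lies well outside the
# margin and only perturbs the early gradient-descent direction.
X = np.array([[ 1.0,  0.5], [ 1.0, -0.5],
              [-1.0,  0.5], [-1.0, -0.5],
              [ 3.0,  3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0, 1.0])
w_star = np.array([1.0, 0.0])          # known max-margin direction

def grad(w):
    """Gradient of (1/n) * sum_i log(1 + exp(-y_i * <w, x_i>))."""
    margins = y * (X @ w)
    coeff = -y * expit(-margins)       # -y_i * sigmoid(-y_i <w, x_i>)
    return (coeff[:, None] * X).mean(axis=0)

w = np.zeros(2)
eta = 0.5                              # arbitrary step size, stable for this data
for t in range(1, 100_001):
    w -= eta * grad(w)
    if t in (10, 100, 1_000, 10_000, 100_000):
        margins = y * (X @ w)
        loss = np.mean(np.log1p(np.exp(-margins)))
        d = w / np.linalg.norm(w)
        angle = np.degrees(np.arccos(np.clip(d @ w_star, -1.0, 1.0)))
        print(f"t={t:>7}  loss={loss:.2e}  ||w||={np.linalg.norm(w):6.2f}  "
              f"angle(w, w*)={angle:6.3f} deg")
```

Running the sketch, the loss drops toward zero and ||w|| keeps growing (roughly like log t), while the printed angle to the max-margin direction shrinks far more slowly, which is the slow directional convergence the abstract refers to.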