Characterizing Implicit Bias in Terms of Optimization Geometry
Format: Article
Language: English
Abstract: We study the implicit bias of generic optimization methods, such as mirror descent, natural gradient descent, and steepest descent with respect to different potentials and norms, when optimizing underdetermined linear regression or separable linear classification problems. We explore the question of whether the specific global minimum (among the many possible global minima) reached by an algorithm can be characterized in terms of the potential or norm of the optimization geometry, and independently of hyperparameter choices such as step-size and momentum.
DOI: 10.48550/arxiv.1802.08246
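
As a concrete illustration of the kind of implicit bias the abstract refers to, the sketch below (illustrative only, not the paper's code; the problem sizes and names are assumptions) runs plain gradient descent, i.e. steepest descent with respect to the Euclidean norm, on an underdetermined least-squares problem starting from zero, and compares the result to the closed-form minimum L2-norm interpolating solution.

```python
# Minimal numerical sketch (illustrative, not the paper's code): gradient
# descent on an underdetermined least-squares problem, started from zero,
# converges to the minimum Euclidean-norm solution among all interpolators.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                                  # fewer equations than unknowns
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Plain gradient descent (steepest descent w.r.t. the L2 norm) from w = 0.
w = np.zeros(d)
step = 1.0 / np.linalg.norm(X, ord=2) ** 2      # step below 1 / largest eigenvalue of X^T X
for _ in range(20_000):
    w -= step * X.T @ (X @ w - y)

# Closed-form minimum-norm interpolating solution: X^T (X X^T)^{-1} y.
w_min_norm = X.T @ np.linalg.solve(X @ X.T, y)

print("training residual ||Xw - y||:", np.linalg.norm(X @ w - y))
print("gap to min-norm solution    :", np.linalg.norm(w - w_min_norm))
```

Both printed quantities come out near zero: the iterates interpolate the data and land on the minimum-norm interpolator, regardless of the (sufficiently small) step-size. Changing the optimization geometry, for example running mirror descent with a different potential, steers the iterates toward a different interpolating solution, which is the dependence on the geometry that the paper characterizes.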