Learning Rate Schedules in the Presence of Distribution Shift
Main authors: | Matthew Fahrbach, Adel Javanmard, Vahab Mirrokni, Pratik Worah |
---|---|
Format: | Article |
Language: | English |
Online access: | Order full text |
Published in: | Proceedings of the 40th International Conference on Machine Learning (ICML 2023), 9523-9546 |
Abstract: | We design learning rate schedules that minimize regret for SGD-based online learning in the presence of a changing data distribution. We fully characterize the optimal learning rate schedule for online linear regression via a novel analysis with stochastic differential equations. For general convex loss functions, we propose new learning rate schedules that are robust to distribution shift, and we give upper and lower bounds for the regret that differ only by constants. For non-convex loss functions, we define a notion of regret based on the gradient norm of the estimated models and propose a learning rate schedule that minimizes an upper bound on the total expected regret. Intuitively, one expects changing loss landscapes to require more exploration, and we confirm that optimal learning rate schedules typically increase in the presence of distribution shift. Finally, we provide experiments for high-dimensional regression models and neural networks to illustrate these learning rate schedules and their cumulative regret. |
DOI: | 10.48550/arXiv.2303.15634 |
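
For concreteness, one natural way to formalize the gradient-norm notion of regret mentioned in the abstract is the following; this is an illustrative formalization inferred from the abstract's wording, and the paper's exact definition may differ:

$$\mathrm{Regret}(T) \;=\; \sum_{t=1}^{T} \mathbb{E}\left[\|\nabla \ell_t(w_t)\|^2\right],$$

where $\ell_t$ is the (possibly shifting) loss at step $t$ and $w_t$ is the model after $t$ SGD updates.

The sketch below illustrates the setting of the abstract numerically: SGD for online linear regression whose true weights drift over time, comparing a classic decaying schedule against one bounded away from zero. It is a minimal illustration, not the paper's algorithm (which derives optimal schedules analytically via stochastic differential equations); the dimension, drift rate, noise level, and both candidate schedules are hypothetical choices.

```python
# Illustrative sketch only: online linear regression with a drifting target,
# comparing two hand-picked learning rate schedules. Not the paper's method.
import numpy as np

d, T = 10, 5000
noise_std = 0.1   # observation noise level (assumed)
drift_std = 0.02  # per-step random-walk drift of the true weights (assumed)

def run(schedule):
    """Run online SGD with the given step-size schedule.

    Returns the cumulative excess squared error versus the moving target,
    a simple regret proxy for isotropic Gaussian features, where
    E[(x.w - y)^2] - E[(x.w_true - y)^2] = ||w - w_true||^2.
    """
    rng = np.random.default_rng(0)  # same data stream for both schedules
    w_true = rng.normal(size=d) / np.sqrt(d)
    w = np.zeros(d)
    total, history = 0.0, []
    for t in range(1, T + 1):
        x = rng.normal(size=d)
        y = x @ w_true + noise_std * rng.normal()
        grad = (x @ w - y) * x            # gradient of 0.5 * (x.w - y)^2
        w = w - schedule(t) * grad
        total += float(np.sum((w - w_true) ** 2))
        history.append(total)
        w_true = w_true + drift_std * rng.normal(size=d)  # distribution shift
    return np.array(history)

decay = run(lambda t: 1.0 / (10.0 + t))   # classic decaying schedule
const = run(lambda t: 0.05)               # stays "exploring" under drift

print(f"cumulative excess risk, 1/t decay: {decay[-1]:.1f}")
print(f"cumulative excess risk, constant : {const[-1]:.1f}")
```

On a typical run, the decaying schedule's step size eventually becomes too small to follow the drifting optimum, so its cumulative excess risk grows superlinearly, while the constant schedule settles into a steady tracking error. This mirrors the abstract's observation that distribution shift pushes optimal learning rate schedules upward relative to the stationary case.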