High Probability Bounds for a Class of Nonconvex Algorithms with AdaGrad Stepsize
| Published in: | arXiv.org 2022-04 |
| --- | --- |
| Main authors: | , , |
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Full text |
| Abstract: | In this paper, we propose a new, simplified high probability analysis of AdaGrad for smooth, non-convex problems. More specifically, we focus on a particular accelerated gradient (AGD) template (Lan, 2020), through which we recover the original AdaGrad and its variant with averaging, and prove a convergence rate of \(\mathcal{O}(1/\sqrt{T})\) with high probability, without knowledge of the smoothness constant or the noise variance. We use a particular version of Freedman's concentration bound for martingale difference sequences (Kakade & Tewari, 2008), which enables us to achieve the best-known dependence of \(\log(1/\delta)\) on the probability margin \(\delta\). We present our analysis in a modular way and obtain a complementary \(\mathcal{O}(1/T)\) convergence rate in the deterministic setting. To the best of our knowledge, this is the first high probability result for AdaGrad with a truly adaptive scheme, i.e., one completely oblivious to knowledge of the smoothness constant and a uniform variance bound, that simultaneously achieves the best-known \(\log(1/\delta)\) dependence. We further prove a noise adaptation property of AdaGrad under additional noise assumptions. |
| ISSN: | 2331-8422 |
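
For readers unfamiliar with the stepsize the abstract refers to, the following is a minimal, illustrative sketch of plain SGD with the scalar ("norm") AdaGrad stepsize, which uses only observed gradients and hence needs no smoothness constant or variance bound. The function names and the toy quadratic objective are assumptions made for illustration; the sketch deliberately omits the AGD template and the averaged variant analyzed in the paper.

```python
import numpy as np

def adagrad_norm_sgd(grad_oracle, x0, T, eta=1.0, b0=1e-8):
    """SGD with the scalar ("norm") AdaGrad stepsize.

    The stepsize eta / sqrt(b0**2 + sum_{s<=t} ||g_s||**2) adapts to the
    observed gradients, so no smoothness or variance knowledge is required.
    Illustrative sketch only; the paper plugs such a stepsize into an AGD
    template, which is not reproduced here.
    """
    x = np.asarray(x0, dtype=float)
    sum_sq = b0 ** 2                          # running sum of squared gradient norms
    for _ in range(T):
        g = grad_oracle(x)                    # stochastic gradient at the current point
        sum_sq += float(np.dot(g, g))         # accumulate ||g_t||^2
        x = x - (eta / np.sqrt(sum_sq)) * g   # AdaGrad-Norm update
    return x

# Usage on a toy quadratic with Gaussian gradient noise (illustrative only).
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)  # grad of 0.5*||x||^2 plus noise
x_final = adagrad_norm_sgd(noisy_grad, x0=np.ones(5), T=1000)
print(x_final)
```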