Estimate Sequences for Variance-Reduced Stochastic Composite Optimization
Saved in:
Main authors: | , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | International Conference on Machine Learning (ICML), June 2019, Long Beach, United States. In this paper, we propose a unified view of gradient-based algorithms for stochastic convex composite optimization by extending the concept of estimate sequence introduced by Nesterov. This point of view covers the stochastic gradient descent method and variants of the approaches SAGA and SVRG, and has several advantages: (i) we provide a generic proof of convergence for the aforementioned methods; (ii) we show that this SVRG variant is adaptive to strong convexity; (iii) we naturally obtain new algorithms with the same guarantees; (iv) we derive generic strategies to make these algorithms robust to stochastic noise, which is useful when data is corrupted by small random perturbations. Finally, we show that this viewpoint is useful to obtain new accelerated algorithms in the sense of Nesterov. |
DOI: | 10.48550/arxiv.1905.02374 |
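
The abstract refers to variance-reduced methods such as SVRG applied to composite objectives (a smooth data-fitting term plus a nonsmooth regularizer). Below is a minimal, hypothetical prox-SVRG sketch for a least-squares loss with an l1 penalty, with a hand-picked step size `eta`; it is only meant to illustrate the kind of update rule the estimate-sequence framework covers, not the algorithm proposed in the paper.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_svrg(A, b, lam, eta=0.01, n_epochs=20, inner=None, seed=0):
    """Minimize (1/2n)||Ax - b||^2 + lam*||x||_1 with a basic prox-SVRG loop.

    Illustrative sketch only: eta, n_epochs, and inner are assumed values,
    not tuned constants from the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    inner = inner or n
    x_ref = np.zeros(d)
    for _ in range(n_epochs):
        # Full gradient at the reference (anchor) point.
        full_grad = A.T @ (A @ x_ref - b) / n
        x = x_ref.copy()
        for _ in range(inner):
            i = rng.integers(n)
            a_i = A[i]
            # Variance-reduced stochastic gradient estimate.
            g = a_i * (a_i @ x - b[i]) - a_i * (a_i @ x_ref - b[i]) + full_grad
            # Proximal step handles the nonsmooth l1 term.
            x = soft_threshold(x - eta * g, eta * lam)
        x_ref = x
    return x_ref
```

For instance, on a small synthetic problem, `prox_svrg(A, b, lam=0.1)` returns an approximate minimizer of the regularized least-squares objective.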