A Gradient Complexity Analysis for Minimizing the Sum of Strongly Convex Functions with Varying Condition Numbers
Format: Article
Language: English
Online access: Order full text
Abstract: A popular approach to minimizing a finite sum of convex functions is stochastic gradient descent (SGD) and its variants. Fundamental research questions associated with SGD include: (i) finding a lower bound on the number of times that the gradient oracle of each individual function must be accessed in order to find an $\epsilon$-minimizer of the overall objective; (ii) designing algorithms that are guaranteed to find an $\epsilon$-minimizer of the overall objective in expectation with no more than a certain number (in terms of $1/\epsilon$) of accesses to the gradient oracle of each function, i.e., an upper bound. If these two bounds are of the same order of magnitude, the algorithms may be called optimal. Most existing results along this line of research assume that the functions in the objective share the same condition number. In this paper, the first model we study is the problem of minimizing the sum of finitely many strongly convex functions whose condition numbers are all different. We propose an SGD method for this model and show that it is optimal in gradient computations, up to a logarithmic factor. We then consider a constrained separate block optimization model and present lower and upper bounds for its gradient computation complexity. Next, we propose to solve the Fenchel dual of the constrained block optimization model via the SGD method introduced earlier, and show that it yields a lower iteration complexity than solving the original model by an ADMM-type approach. Finally, we extend the analysis to the general composite convex optimization model and obtain gradient-computation complexity results under certain conditions.
DOI: 10.48550/arxiv.2208.06524
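As a rough illustration of the problem setup described in the abstract (not the paper's proposed algorithm), the sketch below runs plain SGD on a finite sum of strongly convex quadratics whose condition numbers differ across components. The Lipschitz-proportional sampling, step-size schedule, and problem data are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's method): SGD on
# F(x) = (1/n) * sum_i f_i(x), with quadratic components
# f_i(x) = 0.5 * x^T A_i x - b_i^T x that are strongly convex but have
# widely varying condition numbers kappa_i = L_i / mu_i.
# Component i is sampled with probability proportional to its smoothness
# constant L_i (standard importance sampling), and the sampled gradient is
# reweighted so the stochastic gradient stays unbiased.

rng = np.random.default_rng(0)
n, d = 10, 5

A, b, L = [], [], []
for i in range(n):
    # Curvature spread grows with i, giving components different condition numbers.
    eigs = rng.uniform(1.0, 10.0 ** (1 + i % 4), size=d)
    Q = np.linalg.qr(rng.standard_normal((d, d)))[0]
    A.append(Q @ np.diag(eigs) @ Q.T)
    b.append(rng.standard_normal(d))
    L.append(eigs.max())                 # smoothness constant of f_i
L = np.array(L)
probs = L / L.sum()                      # sample proportionally to L_i

def grad_i(i, x):
    """Gradient oracle of the i-th component f_i."""
    return A[i] @ x - b[i]

x = np.zeros(d)
for t in range(1, 20001):
    i = rng.choice(n, p=probs)
    g = grad_i(i, x) / (n * probs[i])    # unbiased estimate of grad F(x)
    x -= (1.0 / (t + 100)) * g           # diminishing step size (illustrative)

# Compare against the exact minimizer of the averaged quadratic.
x_star = np.linalg.solve(sum(A) / n, sum(b) / n)
print("distance to minimizer:", np.linalg.norm(x - x_star))
```

The reweighting by 1/(n * probs[i]) keeps the gradient estimate unbiased under non-uniform sampling; the paper's actual method and its optimal complexity guarantees are developed in the full text.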