An acceleration procedure for optimal first-order methods
Format: Article
Language: English
Abstract: We introduce in this paper an optimal first-order method that allows an easy and cheap evaluation of the local Lipschitz constant of the objective's gradient. This constant must ideally be chosen at every iteration as small as possible, while serving as an indispensable upper bound for the value of the objective function. In the previously existing variants of optimal first-order methods, this upper bound inequality was constructed from points computed during the current iteration. It was thus not possible to select the optimal value for this Lipschitz constant at the beginning of the iteration. In our variant, the upper bound inequality is constructed from points available before the current iteration, offering us the possibility to set the Lipschitz constant to its optimal value at once. This procedure, although efficient in practice, has a higher worst-case complexity than standard optimal first-order methods. We propose an alternative strategy that retains the practical efficiency of this procedure, while having an optimal worst-case complexity. We show how our generic scheme can be adapted to smoothing techniques, and perform numerical experiments on large-scale eigenvalue minimization problems. Compared with standard optimal first-order methods, our schemes allow us to divide computation times by two to three orders of magnitude for the largest problems we considered.
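
The abstract does not spell out the upper bound inequality; in standard optimal first-order methods it is the descent lemma, which for an objective f with an L-Lipschitz gradient reads

f(y) \le f(x) + \langle \nabla f(x),\, y - x \rangle + \frac{L}{2}\,\|y - x\|^2.

A smaller L permits longer steps, but the inequality must hold at the points where it is used, which is why L is usually estimated by backtracking during the iteration. As an illustration only, here is a minimal Python sketch of such a generic backtracking step (a FISTA-style trial loop, not the paper's variant; the function names and the quadratic test problem are hypothetical):

```python
import numpy as np

def backtracking_gradient_step(f, grad_f, x, L, growth=2.0):
    """One gradient step with a backtracked local Lipschitz estimate.

    Generic sketch (not the paper's variant): L is increased until the
    descent-lemma upper bound holds at the candidate point, so the bound
    is checked against a point computed *during* the current iteration.
    """
    fx, gx = f(x), grad_f(x)
    while True:
        y = x - gx / L                              # candidate gradient step
        # Descent lemma: f(y) <= f(x) + <g, y-x> + (L/2)||y-x||^2
        upper = fx + gx @ (y - x) + 0.5 * L * np.dot(y - x, y - x)
        if f(y) <= upper:
            return y, L                             # L is a valid local estimate
        L *= growth                                 # bound violated: increase L

# Hypothetical usage on a least-squares objective f(x) = 0.5 ||A x - b||^2
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad_f = lambda x: A.T @ (A @ x - b)
x, L = np.zeros(10), 1.0
for _ in range(100):
    x, L = backtracking_gradient_step(f, grad_f, x, L)
    L /= 2.0  # let the estimate shrink again between iterations
print(f(x))
```

In the paper's variant, by contrast, the upper bound inequality is built from points already available before the iteration, so the Lipschitz constant can be set to its optimal value at once rather than discovered through a trial loop of this kind.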
DOI: 10.48550/arxiv.1207.3951