Convergence Rate of Inertial Proximal Algorithms with General Extrapolation and Proximal Coefficients


Detailed Description

Bibliographic Details
Published in: Vietnam Journal of Mathematics, 2020-06, Vol. 48 (2), p. 247-276
Main authors: Attouch, Hedy; Chbani, Zaki; Riahi, Hassan
Format: Article
Language: English
Online access: Full text
Description
Summary: In a Hilbert space setting H, in order to minimize by fast methods a general convex, lower semicontinuous, and proper function Φ : H → ℝ ∪ {+∞}, we analyze the convergence rate of inertial proximal algorithms. These algorithms involve, in a general form, both extrapolation coefficients (including the Nesterov acceleration method) and proximal coefficients. They can be interpreted as the discrete-time version of inertial continuous gradient systems with general damping and time scale coefficients. Based on the proper setting of these parameters, we show the fast convergence of values and the convergence of iterates. In doing so, we provide an overview of this class of algorithms. Our study complements the previous Attouch–Cabot paper (SIOPT, 2018) by introducing time scaling aspects into the algorithm, and sheds new light on Güler's seminal papers on the convergence rate of accelerated proximal methods for convex optimization.
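The inertial proximal scheme discussed in the abstract combines an extrapolation step with a proximal step. A minimal sketch, assuming a simple quadratic Φ(x) = ½‖x‖² (so the proximal operator has a closed form) and illustrative parameter choices α_k = k/(k+3) for the extrapolation coefficient and λ_k = k for the proximal (time scaling) coefficient; these specific values are for demonstration only and are not taken from the paper:

```python
import numpy as np

def prox_quadratic(y, lam):
    # Proximal operator of Phi(x) = 0.5*||x||^2:
    # argmin_x { 0.5*||x||^2 + (1/(2*lam))*||x - y||^2 } = y / (1 + lam)
    return y / (1.0 + lam)

def inertial_proximal(x0, n_iters=100):
    # Inertial proximal iteration: extrapolate using the previous two
    # iterates, then apply the proximal operator with coefficient lam_k.
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for k in range(1, n_iters + 1):
        alpha = k / (k + 3.0)                  # extrapolation coefficient (illustrative)
        lam = float(k)                         # proximal coefficient / time scaling (illustrative)
        y = x + alpha * (x - x_prev)           # extrapolation step
        x_prev, x = x, prox_quadratic(y, lam)  # proximal step
    return x

# The iterates approach the minimizer of Phi (here, the origin).
print(inertial_proximal([1.0, -2.0], n_iters=200))
```

Since Φ is strongly convex with minimizer 0 and λ_k grows, the iterates contract rapidly toward the origin; for a general Φ, `prox_quadratic` would be replaced by the proximal operator of that function.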
ISSN: 2305-221X, 2305-2228
DOI: 10.1007/s10013-020-00399-y