Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates

Bibliographic Details
Published in: Computational Optimization and Applications, April 2018, Vol. 69 (3), pp. 597–627
Main authors: Ghanbari, Hiva; Scheinberg, Katya
Format: Article
Language: English
Online access: Full text
Description
Abstract: A general, inexact, efficient proximal quasi-Newton algorithm for composite optimization problems has been proposed by Scheinberg and Tang (Math Program 160:495–529, 2016), and a sublinear global convergence rate has been established. In this paper, we analyze the global convergence rate of this method, in both the exact and inexact settings, in the case when the objective function is strongly convex. We also investigate a practical variant of this method by establishing a simple stopping criterion for the subproblem optimization. Furthermore, we consider an accelerated variant of the proximal quasi-Newton algorithm, based on the FISTA of Beck and Teboulle (SIAM J Imaging Sci 2:183–202, 2009). Jiang et al. (SIAM J Optim 22:1042–1064, 2012) considered a similar accelerated method, whose convergence rate analysis relies on very strong, impractical assumptions on the Hessian estimates. We present a modified analysis that relaxes these assumptions, and we perform a numerical comparison of the accelerated proximal quasi-Newton algorithm and the regular one. Our analysis and computational results show that acceleration may not bring any benefit in the quasi-Newton setting.
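To make the class of methods in the abstract concrete, here is a minimal sketch on an l1-regularized least-squares problem, min_x 0.5*||Ax - b||^2 + lam*||x||_1. It contrasts proximal steps under a diagonal Hessian estimate (a deliberately simple stand-in for a quasi-Newton metric, chosen so the scaled proximal subproblem has a closed form) with the FISTA-accelerated proximal gradient method of Beck and Teboulle. The function names, the diagonal majorizer, and the test instance are illustrative assumptions; this is not the algorithm of Scheinberg and Tang, whose quasi-Newton Hessian approximations and inexact subproblem solves are the subject of the paper.

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1, applied componentwise
    # (tau may be a scalar or a vector of per-coordinate thresholds).
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def prox_diag_qn_l1(A, b, lam, n_iter=500):
    # Proximal steps under a diagonal metric H = diag(d) for
    #   min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1.
    # Row sums of |A^T A| give a diagonal majorizer of the Hessian
    # (Gershgorin), so each step decreases the objective; with a diagonal
    # H the scaled prox separates and is solved exactly in closed form.
    M = A.T @ A
    d = np.maximum(np.abs(M).sum(axis=1), 1e-8)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = M @ x - A.T @ b          # gradient of the smooth part
        x = soft_threshold(x - grad / d, lam / d)  # scaled proximal step
    return x

def fista_l1(A, b, lam, n_iter=500):
    # FISTA (Beck and Teboulle) with fixed step 1/L, where
    # L = ||A||_2^2 is the Lipschitz constant of the smooth gradient.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 50))
    b = rng.standard_normal(100)
    lam = 0.1
    obj = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum()
    print("diagonal-metric prox method:", obj(prox_diag_qn_l1(A, b, lam)))
    print("FISTA:                      ", obj(fista_l1(A, b, lam)))
```

With a genuine (non-diagonal) quasi-Newton metric, the scaled proximal subproblem no longer separates and must itself be solved by an inner iterative method, which is exactly where the paper's inexactness analysis and subproblem stopping criterion come into play.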
ISSN: 0926-6003 (print)
ISSN: 1573-2894 (electronic)
DOI: 10.1007/s10589-017-9964-z