On the optimal linear convergence factor of the relaxed proximal point algorithm for monotone inclusion problems


Detailed Description

Bibliographic Details
Main Authors: Gu, Guoyong; Yang, Junfeng
Format: Article
Language: English
Description
Abstract: Finding a zero of a maximal monotone operator is fundamental in convex optimization and monotone operator theory, and the \emph{proximal point algorithm} (PPA) is a primary method for solving this problem. PPA converges not only globally under fairly mild conditions but also asymptotically at a fast linear rate, provided that the underlying inverse operator is Lipschitz continuous at the origin. These nice convergence properties are preserved by a relaxed variant of PPA. Recently, a linear convergence bound was established in [M. Tao and X. M. Yuan, J. Sci. Comput., 74 (2018), pp. 826-850] for the relaxed PPA, and it was shown that the bound is optimal when the relaxation factor $\gamma$ lies in $[1,2)$. For other choices of $\gamma$, however, the bound obtained by Tao and Yuan is suboptimal. In this paper, we establish tight linear convergence bounds for any choice of $\gamma\in(0,2)$ and make the whole picture of optimal linear convergence bounds clear. These results sharpen our understanding of the asymptotic behavior of the relaxed PPA.
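To make the iteration discussed in the abstract concrete, here is a minimal sketch (not from the paper) of the relaxed PPA for a simple monotone operator $T(x) = Ax$ with $A$ symmetric positive definite, so that the resolvent $J_{\lambda T} = (I + \lambda A)^{-1}$ has a closed form. The function name, the test matrix, and the parameter values are illustrative assumptions; only the update rule $x^{k+1} = (1-\gamma)x^k + \gamma\, J_{\lambda T}(x^k)$ with $\gamma \in (0,2)$ reflects the algorithm named in the text.

```python
import numpy as np

def relaxed_ppa(A, x0, lam=1.0, gamma=1.5, tol=1e-10, max_iter=1000):
    """Relaxed proximal point algorithm (sketch) for T(x) = A x.

    Iterates x+ = (1 - gamma) * x + gamma * J(x), where
    J = (I + lam * A)^{-1} is the resolvent of lam * T and gamma in (0, 2)
    is the relaxation factor from the abstract.
    """
    n = len(x0)
    resolvent = np.linalg.inv(np.eye(n) + lam * A)  # J_{lam T}, closed form here
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = (1 - gamma) * x + gamma * (resolvent @ x)
        if np.linalg.norm(x_new - x) < tol:  # stop when the iterates stabilize
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Illustrative run: T is strongly monotone, so its unique zero is the origin.
A = np.array([[2.0, 0.0], [0.0, 5.0]])
x_star, iters = relaxed_ppa(A, x0=[1.0, -1.0])
print(np.linalg.norm(x_star))  # small: the iterates approach the zero of T
```

Note that $\gamma = 1$ recovers the classical (unrelaxed) PPA; the paper's contribution concerns the tight linear rate as a function of $\gamma$ over the whole interval $(0,2)$.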
DOI:10.48550/arxiv.1905.04537