Tight Ergodic Sublinear Convergence Rate of the Relaxed Proximal Point Algorithm for Monotone Variational Inequalities
Published in: Journal of Optimization Theory and Applications, 2024-07, Vol. 202 (1), pp. 373-387
Main authors: ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: This paper considers the relaxed proximal point algorithm for solving monotone variational inequality problems, and our main contribution is the establishment of a tight ergodic sublinear convergence rate. First, the tight or exact worst-case convergence rate is computed using the performance estimation framework. It is observed that this numerical bound asymptotically coincides with the best-known existing rate, whose tightness is not clear. This implies that, without further assumptions, a sublinear convergence rate is likely the best achievable for the relaxed proximal point algorithm. Motivated by the numerical result, a concrete example is constructed, which provides a lower bound on the exact worst-case convergence rate. This lower bound coincides with the numerical bound computed via the performance estimation framework, leading us to conjecture that the lower bound provided by the example is exactly the tight worst-case rate, which is then verified theoretically. We have thus established an ergodic sublinear complexity rate that is tight in terms of both the sublinear order and the constants involved.
ISSN: 0022-3239, 1573-2878
DOI: 10.1007/s10957-022-02058-3
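For orientation, the following is a minimal sketch of the standard relaxed proximal point iteration referenced in the abstract: a resolvent (proximal) step followed by relaxation with a parameter gamma in (0, 2), together with the ergodic (averaged) iterate to which the rate analysis applies. The skew-symmetric test operator, step size, relaxation parameter, and iteration count below are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Illustrative monotone operator: T(x) = A x with A skew-symmetric,
# so <T(x), x> = 0 and the associated VI has the solution x* = 0.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def resolvent(x, lam):
    """Resolvent J_{lam T}(x) = (I + lam * A)^{-1} x for this linear operator."""
    return np.linalg.solve(np.eye(2) + lam * A, x)

def relaxed_ppa(x0, lam=1.0, gamma=1.5, iters=200):
    """Relaxed proximal point iteration with an ergodic (averaged) output.

    Update: x_{k+1} = x_k + gamma * (J_{lam T}(x_k) - x_k), gamma in (0, 2).
    Parameter values here are illustrative, not the paper's choices.
    """
    x = np.asarray(x0, dtype=float).copy()
    avg = np.zeros_like(x)
    for _ in range(iters):
        y = resolvent(x, lam)       # proximal (resolvent) step
        x = x + gamma * (y - x)     # relaxation step
        avg += y
    return x, avg / iters           # last iterate and ergodic average

if __name__ == "__main__":
    x_last, x_erg = relaxed_ppa(np.array([1.0, 1.0]))
    print("last iterate:   ", x_last)   # approaches x* = 0
    print("ergodic average:", x_erg)
```

The ergodic average of the resolvent outputs is the quantity whose worst-case sublinear rate the paper shows to be tight; the sketch only illustrates how that average is formed alongside the relaxed update.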