Global Convergence Analysis of the Power Proximal Point and Augmented Lagrangian Method
Format: Article
Language: English
Abstract: In this paper we study an unconventional inexact Augmented Lagrangian Method (ALM) for convex optimization problems, first proposed by Bertsekas, wherein the penalty term is a potentially non-Euclidean norm raised to a power between one and two. We analyze the algorithm through the lens of a nonlinear Proximal Point Method (PPM), as originally introduced by Luque, applied to the dual problem. While Luque analyzes the order of local convergence of the scheme with Euclidean norms, our focus is on the non-Euclidean case, which prevents us from using standard analysis tools such as the nonexpansiveness of the proximal mapping. To allow for errors in the primal update, we derive two implementable stopping criteria under which we analyze both the global and the local convergence rates of the algorithm. More specifically, we show that the method enjoys a fast sublinear global rate in general and a local superlinear rate under suitable growth assumptions. We also highlight that the power ALM can be interpreted as a classical ALM with an implicitly defined penalty-parameter schedule, which reduces its parameter dependence. Our experiments on a number of relevant problems suggest that, for certain powers, the method performs similarly to a classical ALM with a fine-tuned adaptive penalty rule, despite involving fewer parameters.
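To make the idea concrete, the sketch below shows a minimal power-ALM loop for a single equality constraint. It is an illustration only, not the paper's algorithm: the power-scaled dual update, the gradient-descent inner solver, and all step sizes and tolerances are assumptions chosen for a toy problem; the paper's method uses general (possibly non-Euclidean) norms and implementable stopping criteria for the inexact primal update. The penalty is `|c(x)|^p / p` with `p` between one and two; `p = 2` recovers the classical quadratic ALM.

```python
import math

def power_alm(grad_f, c, grad_c, x, y=0.0, rho=1.0, p=1.5,
              outer_iters=30, inner_iters=200, lr=0.05):
    """Hedged sketch of a power ALM for one equality constraint c(x) = 0.

    The augmented term is (rho / p) * |c(x)|^p with p in (1, 2]; the
    inner subproblem is solved only approximately by gradient descent.
    """
    for _ in range(outer_iters):
        # Inner loop: approximately minimize
        #   f(x) + y * c(x) + (rho / p) * |c(x)|^p  over x.
        for _ in range(inner_iters):
            cx = c(x)
            # d/dc (rho/p)|c|^p = rho * |c|^(p-2) * c  (guard c = 0, since
            # 0.0 ** (p - 2) would raise for p < 2).
            pen = rho * (abs(cx) ** (p - 2)) * cx if cx != 0 else 0.0
            g = [gf + (y + pen) * gc
                 for gf, gc in zip(grad_f(x), grad_c(x))]
            x = [xi - lr * gi for xi, gi in zip(x, g)]
        # Dual ascent with the same power-scaled residual (assumed update
        # rule; it reduces to the classical y += rho * c(x) when p = 2).
        cx = c(x)
        if cx != 0:
            y += rho * (abs(cx) ** (p - 2)) * cx
    return x, y

# Toy problem: min (1/2)||x||^2  s.t.  x1 + x2 = 2, solution x* = (1, 1).
f_grad = lambda x: x[:]                  # gradient of (1/2)||x||^2
c      = lambda x: x[0] + x[1] - 2.0     # equality constraint
c_grad = lambda x: [1.0, 1.0]

x, y = power_alm(f_grad, c, c_grad, [0.0, 0.0])
print([round(v, 2) for v in x])  # close to [1.0, 1.0]
```

Because the inner solves are inexact, the iterates only approach the KKT pair (x*, y*) = ((1, 1), -1) up to the inner-loop accuracy, which mirrors why the paper's implementable stopping criteria matter.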
DOI: 10.48550/arxiv.2312.12205