Complexity of linearized augmented Lagrangian for optimization with nonlinear equality constraints
Format: Article
Language: English
Abstract: In this paper, we consider a nonconvex optimization problem with nonlinear equality constraints. We assume that both the objective function and the functional constraints are locally smooth. To solve this problem, we propose a linearized augmented Lagrangian method: we linearize the functional constraints in the augmented Lagrangian at the current iterate and add a quadratic regularization, yielding a subproblem that is easy to solve and whose solution is the next iterate. Under a dynamic choice of the regularization parameter, we prove global asymptotic convergence of the iterates to a critical point of the problem. We also derive convergence guarantees for the iterates of our method to an $\epsilon$-first-order optimal solution in $\mathcal{O}(1/\epsilon^2)$ outer iterations. Finally, we show that when the problem data are, e.g., semialgebraic, the sequence generated by our algorithm converges, and we derive convergence rates. We validate the theory and the performance of the proposed algorithm by numerically comparing it with existing methods from the literature.
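The abstract describes the main iteration: linearize the equality constraints inside the augmented Lagrangian at the current iterate, add a quadratic proximal regularization, and take the subproblem's solution as the next iterate. The sketch below illustrates this scheme on a small hypothetical toy problem; the objective, constraint, parameter values, and the standard multiplier update are illustrative assumptions, not taken from the paper (which, in particular, uses a dynamic regularization parameter rather than the fixed one used here).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy problem (not from the paper):
# minimize f(x) = x0^2 + x1^2  subject to  c(x) = x0*x1 - 1 = 0
f = lambda x: x[0]**2 + x[1]**2
c = lambda x: np.array([x[0] * x[1] - 1.0])
J = lambda x: np.array([[x[1], x[0]]])  # Jacobian of c

rho, beta = 10.0, 1.0       # penalty and regularization parameters
                            # (fixed here; the paper chooses beta dynamically)
x = np.array([2.0, 0.5])    # starting point
lam = np.zeros(1)           # Lagrange multiplier estimate

for _ in range(50):
    xk, ck, Jk = x.copy(), c(x), J(x)

    # Linearized augmented Lagrangian subproblem: the constraint c is
    # replaced by its linearization at xk, and a quadratic proximal
    # term (beta/2)||z - xk||^2 is added.
    def sub(z):
        lin = ck + Jk @ (z - xk)
        return (f(z) + lam @ lin + 0.5 * rho * lin @ lin
                + 0.5 * beta * (z - xk) @ (z - xk))

    x = minimize(sub, xk).x         # next iterate = subproblem solution
    lam = lam + rho * c(x)          # classical multiplier update (an assumption)
```

For this toy instance the iterates approach a feasible critical point near $(1, 1)$, where the constraint $x_0 x_1 = 1$ holds and $f$ attains its constrained minimum value of $2$.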
DOI: 10.48550/arxiv.2301.08345