A First-Order Primal-Dual Method for Nonconvex Constrained Optimization Based on the Augmented Lagrangian
Published in: Mathematics of Operations Research, 2024-02, Vol. 49 (1), pp. 125-150
Author:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Nonlinearly constrained nonconvex and nonsmooth optimization models play an increasingly important role in machine learning, statistics, and data analytics. In this paper, based on the augmented Lagrangian function, we introduce a flexible first-order primal-dual method, to be called the nonconvex auxiliary problem principle of augmented Lagrangian (NAPP-AL), for solving a class of nonlinearly constrained nonconvex and nonsmooth optimization problems. We demonstrate that NAPP-AL converges to a stationary solution at the rate of o(1/k), where k is the number of iterations. Moreover, under an additional error bound condition (called HVP-EB in the paper) with exponent θ ∈ (0, 1), we further show the global convergence of NAPP-AL. If, additionally, θ ∈ (0, 1/2], the convergence rate is in fact linear. Finally, we show that the well-known Kurdyka-Łojasiewicz property and Hölderian metric subregularity imply the aforementioned HVP-EB condition. We also demonstrate that, under mild conditions, NAPP-AL can be interpreted as a variant of the forward-backward operator splitting method in this context.
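For orientation, the augmented Lagrangian construction the abstract refers to can be written out for the model problem min_x f(x) subject to g(x) = 0. This is only the standard textbook template, a sketch assuming equality constraints; it is not the paper's specific NAPP-AL update, which is defined in the article itself:

\[
\mathcal{L}_\rho(x,\lambda) \;=\; f(x) \;+\; \langle \lambda,\, g(x) \rangle \;+\; \frac{\rho}{2}\, \lVert g(x) \rVert^2 .
\]

A generic first-order primal-dual iteration built on this function alternates an (inexact) primal step with a dual ascent step:

\[
x^{k+1} \;\approx\; \operatorname*{arg\,min}_{x}\; \mathcal{L}_\rho(x, \lambda^{k}),
\qquad
\lambda^{k+1} \;=\; \lambda^{k} \;+\; \rho\, g(x^{k+1}).
\]

In auxiliary-problem-principle methods of this family, the exact primal minimization is typically replaced by a single proximal or linearized subproblem, which is what keeps the overall scheme first order; the precise subproblem used by NAPP-AL is specified in the paper.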
Funding: This work was supported by the National Natural Science Foundation of China [Grant 71871140].
ISSN: 0364-765X (print), 1526-5471 (online)
DOI: 10.1287/moor.2022.1350