Proximal Regularization for the Saddle Point Gradient Dynamics



Bibliographic Details
Published in: IEEE Transactions on Automatic Control, 2021-09, Vol. 66 (9), pp. 4385-4392
Authors: Goldsztajn, Diego; Paganini, Fernando
Format: Article
Language: English
Description
Abstract: This article concerns the solution of a convex optimization problem through the saddle point gradient dynamics. Instead of using the standard Lagrangian as is classical in this method, we consider a regularized Lagrangian obtained through a proximal minimization step. We show that, without assumptions of smoothness or strict convexity in the original problem, the regularized Lagrangian is smooth and leads to globally convergent saddle point dynamics. The method is demonstrated through an application to resource allocation in cloud computing.
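For orientation, a minimal sketch of the construction the abstract alludes to, under the assumption that the proximal minimization step is of Moreau-envelope type in the primal variable; the symbols f, g, L, the parameter ρ > 0, and this particular form are illustrative assumptions, not taken from the article itself.

\[
\min_{x \in \mathbb{R}^{n}} f(x) \quad \text{subject to} \quad g(x) \le 0,
\qquad
L(x,\lambda) = f(x) + \lambda^{\top} g(x),
\]
\[
% proximal minimization step: Moreau envelope of the Lagrangian in x
\hat{L}_{\rho}(x,\lambda) = \min_{z \in \mathbb{R}^{n}}
\Big\{ L(z,\lambda) + \tfrac{1}{2\rho}\,\lVert z - x \rVert^{2} \Big\},
\qquad \rho > 0,
\]
\[
% saddle point gradient dynamics run on the regularized Lagrangian
\dot{x} = -\nabla_{x} \hat{L}_{\rho}(x,\lambda),
\qquad
\dot{\lambda} = \big[\nabla_{\lambda} \hat{L}_{\rho}(x,\lambda)\big]^{+}_{\lambda},
\]
where \([v]^{+}_{\lambda}\) is the usual projection keeping the multipliers nonnegative: its i-th entry equals \(v_i\) if \(\lambda_i > 0\) and \(\max(v_i, 0)\) if \(\lambda_i = 0\). The Moreau envelope of a convex function is continuously differentiable even when the function itself is not, which is consistent with the smoothness claim in the abstract; whether this is exactly the regularization used by the authors would need to be checked against the full text.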
ISSN: 0018-9286, 1558-2523
DOI: 10.1109/TAC.2020.3045124