Nonconvex Distributed Optimization via LaSalle and Singular Perturbations

Bibliographic Details
Published in: IEEE Control Systems Letters, 2023, Vol. 7, pp. 301-306
Main Authors: Carnevale, Guido; Notarstefano, Giuseppe
Format: Article
Language: English
Online Access: Full text
Description
Abstract: In this letter we address nonconvex distributed consensus optimization, a popular framework for distributed big-data analytics and learning. We consider the Gradient Tracking algorithm and, by resorting to an elegant system-theoretic analysis, we show that agent estimates asymptotically reach consensus to a stationary point. We take advantage of suitable coordinates to write the Gradient Tracking as the interconnection of a fast dynamics and a slow one. To use a singular perturbation analysis, we separately study two auxiliary subsystems, called the boundary layer system and the reduced system, respectively. We provide a Lyapunov function for the boundary layer system and use LaSalle-based arguments to show that trajectories of the reduced system converge to the set of stationary points. Finally, a customized version of LaSalle's Invariance Principle for singularly perturbed systems is proved and used to establish the convergence properties of the Gradient Tracking.
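
For context, the Gradient Tracking algorithm discussed in the abstract is commonly written in the following standard form from the distributed optimization literature; this is a sketch only, and the specific coordinates, weight assumptions, and step-size conditions used in the letter may differ. Each agent i in a network of N agents keeps a local estimate x_i of the decision variable and a tracker s_i of the network-wide average gradient of f(x) = \sum_{i=1}^{N} f_i(x):

    x_i^{t+1} = \sum_{j=1}^{N} w_{ij} x_j^{t} - \gamma s_i^{t}
    s_i^{t+1} = \sum_{j=1}^{N} w_{ij} s_j^{t} + \nabla f_i(x_i^{t+1}) - \nabla f_i(x_i^{t}),

initialized with s_i^{0} = \nabla f_i(x_i^{0}), where the w_{ij} are (typically doubly stochastic) consensus weights on the communication graph and \gamma > 0 is the step size. With this initialization, (1/N) \sum_i s_i^{t} equals the average gradient (1/N) \sum_i \nabla f_i(x_i^{t}) at every iteration, which is the tracking property that consensus-plus-gradient analyses such as the one summarized above typically exploit.
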
ISSN: 2475-1456
DOI: 10.1109/LCSYS.2022.3187918