A Second-Order Projected Primal-Dual Dynamical System for Distributed Optimization and Learning

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2023-09, Vol. 34 (9), pp. 6568-6577
Main Authors: Wang, Xiaoxuan; Yang, Shaofu; Guo, Zhenyuan; Huang, Tingwen
Format: Article
Language: English
Description
Abstract: This article develops distributed optimization strategies for a class of machine learning problems over a directed network of computing agents. In these problems, the global objective function is the sum of local objective functions, each of which is convex and held only by the corresponding computing agent. A second-order Nesterov accelerated dynamical system with a time-varying damping coefficient is developed to address such problems. To handle the constraints in these problems effectively, the projected primal-dual method is incorporated into the Nesterov accelerated system. Using the theory of cocoercive maximal monotone operators, it is shown that the trajectories of the Nesterov accelerated dynamical system reach consensus at the optimal solution, provided that the damping coefficient and gains satisfy certain technical conditions. Finally, the theoretical results are validated on an email classification problem and a logistic regression problem from machine learning.
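To make the kind of dynamics described above concrete, here is a minimal numerical sketch, not the paper's algorithm: four agents with local quadratic objectives are coupled on an undirected ring, and a second-order system with the time-varying damping coefficient alpha/t (the Nesterov-accelerated form) is discretized with a symplectic Euler step. The projected primal-dual coupling of the paper is replaced here by a simple Laplacian penalty, so consensus is only approximate, and all gains (alpha, beta, dt) are arbitrary illustrative values.

```python
# Illustrative sketch only (assumed setup, not the paper's method):
# second-order dynamics  x'' + (alpha/t) x' = -grad F(x)
# where F(x) = sum_i 0.5*(x_i - a_i)^2 + (beta/2) * x^T L x.
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])       # local data: f_i(x) = 0.5*(x - a_i)^2
L = np.array([[ 2, -1,  0, -1],          # graph Laplacian of a 4-cycle
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)

alpha, beta = 3.0, 5.0                   # damping gain and consensus penalty (made-up)
dt, t = 0.01, 1.0
x = np.zeros(4)                          # agent states
v = np.zeros(4)                          # agent velocities

for _ in range(10_000):
    grad = (x - a) + beta * (L @ x)      # local gradient plus disagreement term
    v += dt * (-(alpha / t) * v - grad)  # velocity update with vanishing damping alpha/t
    x += dt * v                          # symplectic Euler position update
    t += dt

# The penalized minimizer solves (I + beta*L) x* = a, which lies close to
# exact consensus at mean(a) = 2.5; the trajectories settle near it.
x_star = np.linalg.solve(np.eye(4) + beta * L, a)
print(x, x_star)
```

Because the consensus constraint is enforced only through the penalty term `beta * L @ x`, the agents agree up to O(1/beta); the paper's projected primal-dual construction is what removes this residual disagreement and handles directed graphs and explicit constraint sets.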
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2021.3127883