Online Distributed Stochastic Gradient Algorithm for Non-Convex Optimization With Compressed Communication

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, 2023-10, pp. 1-16
Authors: Li, Jueyou; Li, Chaojie; Fan, Jing; Huang, Tingwen
Format: Article
Language: English
Description
Abstract: This paper examines an online distributed optimization problem over an unbalanced digraph, in which a group of nodes in the network aims to collectively find a minimizer of a time-varying global cost function while the data is distributed among the computing nodes. As the problem size grows, communication inevitably becomes a bottleneck, since each node that exchanges messages potentially transmits large amounts of information to its neighbors. To handle this issue, we design an online stochastic gradient algorithm with compressed communication for the case when gradient information is available. We obtain regret bounds for both non-convex and convex cost functions that are of almost the same order as those of classic distributed optimization algorithms with exact communication. To address the scenario in which gradient information is not accessible, a bandit version of the previous algorithm is then proposed, and explicit regret bounds are likewise established for both non-convex and convex cost functions. The results reveal that the performance of the bandit-feedback method is close to that of the gradient-feedback method. Several numerical experiments corroborate the main theoretical findings and exemplify a remarkable speedup compared to existing distributed algorithms with exact communication.
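
The abstract describes its two ingredients only at a high level: compressing the messages that nodes exchange, and replacing exact gradients with bandit (function-value) feedback when gradients are unavailable. The following Python/NumPy sketch shows one plausible form of each; the top-k compressor, the error-feedback mixing term, the two-point estimator, the weight matrix W, and all step sizes are illustrative assumptions, not the authors' actual algorithm.

import numpy as np

def top_k_compress(x, k):
    """Keep the k largest-magnitude entries of x, zero the rest
    (a common contractive compressor, assumed here for illustration)."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def two_point_bandit_gradient(f, x, delta, rng):
    """Standard two-point zeroth-order estimate of the gradient of f at x,
    usable when only function values (bandit feedback) are observed."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                      # random unit direction
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

def compressed_gradient_round(X, W, grads, step, k):
    """One illustrative round: every node compresses its state, mixes the
    compressed messages through the digraph weights W (column-stochastic
    for an unbalanced digraph), then takes a local gradient step. The
    error-feedback term X - Q is an assumption, not the paper's scheme."""
    Q = np.vstack([top_k_compress(X[i], k) for i in range(len(X))])
    mixed = W @ Q + (X - Q)                     # consensus on compressed messages
    return mixed - step * grads

# Toy usage: 3 nodes tracking time-varying quadratics f_{i,t}(x) = ||x - c_i(t)||^2.
rng = np.random.default_rng(0)
n, d = 3, 10
X = rng.standard_normal((n, d))
W = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])                 # column-stochastic mixing weights
for t in range(100):
    centers = np.array([[np.sin(0.01 * t * (i + 1))] * d for i in range(n)])
    grads = 2.0 * (X - centers)                 # exact local gradients
    X = compressed_gradient_round(X, W, grads, step=0.05, k=3)

# The bandit variant would replace an exact local gradient with, e.g.:
g0 = two_point_bandit_gradient(lambda z: float(np.sum((z - centers[0]) ** 2)),
                               X[0], delta=1e-3, rng=rng)

In this reading, only the k surviving entries of each compressed message need to be transmitted, which is where the communication savings come from, while the two-point estimator trades two function evaluations per round for not needing a gradient oracle, consistent with the abstract's observation that bandit feedback performs close to gradient feedback.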
ISSN: 0018-9286, 1558-2523
DOI: 10.1109/TAC.2023.3327183