Distributed Stochastic Optimization With Unbounded Subgradients Over Randomly Time-Varying Networks
Format: Article
Language: English
Online access: Order full text
Summary: Motivated by distributed statistical learning over uncertain communication
networks, we study distributed stochastic optimization by networked nodes to
cooperatively minimize a sum of convex cost functions. The network is modeled
by a sequence of time-varying random digraphs, with each node representing a
local optimizer and each edge representing a communication link. We consider
the distributed subgradient optimization algorithm with noisy measurements of
the local cost functions' subgradients and with additive and multiplicative
noises in the information exchanged between each pair of nodes. By the
stochastic Lyapunov method, convex analysis, algebraic graph theory, and
martingale convergence theory, we prove that if the local subgradient functions
grow linearly and the sequence of digraphs is conditionally balanced and
uniformly conditionally jointly connected, then proper algorithm step sizes can
be designed so that all nodes' states converge to the global optimal solution
almost surely.
DOI: 10.48550/arxiv.2008.08796
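
To make the kind of recursion described in the summary concrete, the following is a minimal simulation sketch of a distributed noisy-subgradient update: each node mixes noise-corrupted neighbor states over a random digraph and then steps against a noisy measurement of its local (sub)gradient. The quadratic local costs, the noise models, the step-size rule, and the row-normalized weights are all illustrative assumptions made for this sketch; they are not taken from the paper, and in particular the simple row normalization does not enforce the balancedness condition the paper requires.

```python
import numpy as np

# Hypothetical sketch (not the paper's exact recursion): each node i holds a
# quadratic local cost f_i(x) = 0.5 * ||x - targets[i]||^2, whose gradient
# grows linearly, in the spirit of the abstract's linear-growth assumption.
rng = np.random.default_rng(0)
n, d, iters = 6, 2, 20000
targets = rng.standard_normal((n, d))
x = rng.standard_normal((n, d))              # one local state per node

for k in range(iters):
    alpha = 1.0 / (k + 10)                   # diminishing step size (illustrative choice)
    # Random digraph at time k: each directed link present with probability 0.5.
    # Row normalization is used here only for brevity; it does NOT enforce the
    # (conditional) balancedness condition assumed in the paper.
    A = (rng.random((n, n)) < 0.5).astype(float)
    np.fill_diagonal(A, 1.0)
    W = A / A.sum(axis=1, keepdims=True)
    # Neighbor states corrupted by multiplicative and additive channel noise;
    # received[i, j] is what node i hears from node j.
    mult = 1.0 + 0.1 * rng.standard_normal((n, n, 1))
    add = 0.1 * rng.standard_normal((n, n, d))
    received = mult * x[None, :, :] + add
    mixed = np.einsum('ij,ijd->id', W, received)
    # Noisy measurement of each node's local gradient.
    g = (x - targets) + 0.1 * rng.standard_normal((n, d))
    x = mixed - alpha * g

print("final node states:\n", x.round(3))
print("minimizer of the sum (mean of targets):", targets.mean(axis=0).round(3))
```

Under the paper's conditions (linearly growing subgradients, conditionally balanced and uniformly conditionally jointly connected digraphs, properly designed step sizes), all nodes' states converge to the global optimum almost surely; the toy example above only illustrates the structure of the iteration, not those guarantees.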