On the Convergence of NEAR-DGD for Nonconvex Optimization with Second Order Guarantees
Format: Article
Language: English
Abstract: We consider the setting where the nodes of an undirected, connected network collaborate to solve a shared objective modeled as the sum of smooth functions. We assume that each summand is privately known by a unique node. NEAR-DGD is a distributed first order method which permits adjusting the amount of communication between nodes relative to the amount of computation performed locally, in order to balance convergence accuracy and total application cost. In this work, we generalize the convergence properties of a variant of NEAR-DGD from the strongly convex to the nonconvex case. Under mild assumptions, we show convergence to minimizers of a custom Lyapunov function. Moreover, we demonstrate that the gap between those minimizers and the second order stationary solutions of the original problem can become arbitrarily small depending on the choice of algorithm parameters. Finally, we accompany our theoretical analysis with a numerical experiment to evaluate the empirical performance of NEAR-DGD in the nonconvex setting.
DOI: 10.48550/arxiv.2103.14233
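
As a rough illustration of the computation/communication trade-off described in the abstract, the sketch below shows a generic NEAR-DGD-style iteration in Python with NumPy: each node takes a local gradient step, and the network then performs t(k) rounds of averaging with a mixing matrix W. This is a minimal sketch based only on the abstract's description; the function name near_dgd, the constant step size alpha, and the schedule t(k) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def near_dgd(grads, W, x0, alpha=0.01, t=lambda k: 1, num_iters=100):
    """Sketch of a NEAR-DGD-style iteration (illustrative, not the paper's code).

    grads     : list of callables; grads[i](x) is node i's local gradient at x
    W         : (n, n) symmetric, doubly stochastic mixing matrix of the network
    x0        : (n, d) array of initial local iterates, one row per node
    alpha     : constant step size (assumed)
    t         : maps the iteration counter k to the number of consensus rounds
    num_iters : number of outer iterations
    """
    n, _ = x0.shape
    x = x0.astype(float).copy()
    for k in range(num_iters):
        # Computation step: every node takes one local gradient step.
        y = np.vstack([x[i] - alpha * grads[i](x[i]) for i in range(n)])
        # Communication step: t(k) rounds of neighborhood averaging,
        # i.e. applying the mixing matrix W to the stacked iterates t(k) times.
        x = np.linalg.matrix_power(W, t(k)) @ y
    return x
```

Choosing a growing schedule such as t(k) = k spends more communication per iteration to reduce consensus error, which is the kind of parameter choice the abstract refers to when it says the gap to second order stationary solutions of the original problem can be made arbitrarily small.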