Decentralized Sum-of-Nonconvex Optimization
Format: Article
Language: English
Online access: Order full text
Abstract: We consider the optimization problem of minimizing a sum-of-nonconvex function, i.e., a convex function that is the average of nonconvex components. Existing stochastic algorithms for this problem focus only on a single machine and the centralized scenario. In this paper, we study sum-of-nonconvex optimization in the decentralized setting. We present a new theoretical analysis of the PMGT-SVRG algorithm for this problem and prove the linear convergence of this approach. However, the convergence rate of the PMGT-SVRG algorithm depends linearly on the condition number, which is undesirable for ill-conditioned problems. To remedy this issue, we propose an accelerated stochastic decentralized first-order algorithm that incorporates the techniques of acceleration, gradient tracking, and multi-consensus mixing into the SVRG algorithm. The convergence rate of the proposed method depends only on the square root of the condition number. Numerical experiments validate the theoretical guarantees of our proposed algorithms on both synthetic and real-world datasets.
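For concreteness, the problem class described in the abstract can be written as the following optimization problem; the notation (f, f_i, n) is generic and assumed here, not quoted from the paper:

```latex
\min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x),
\qquad f \text{ convex, each } f_i \text{ possibly nonconvex.}
```

In the decentralized setting, the components are additionally partitioned across agents that communicate only with their neighbors in a network, typically through a doubly stochastic mixing matrix.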
DOI: 10.48550/arxiv.2402.02356
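The following is a minimal, illustrative sketch of the ingredients named in the abstract (SVRG-style variance reduction, gradient tracking, and multi-consensus mixing) on a toy quadratic instance. It is an assumed construction for intuition, not the paper's PMGT-SVRG or the proposed accelerated method; the instance, step size, mixing schedule, and the shared sampling index are all assumptions made here.

```python
import numpy as np

# Toy sum-of-nonconvex instance: m agents, each holding n_local quadratic
# components f_{i,j}(x) = 0.5 x^T A_{i,j} x - b_{i,j}^T x. Individual A_{i,j}
# may be indefinite (nonconvex components), while the shift below makes the
# *average* Hessian positive definite (convex sum).
rng = np.random.default_rng(0)
m, n_local, d = 5, 20, 10
A = rng.normal(size=(m, n_local, d, d))
A = 0.5 * (A + A.transpose(0, 1, 3, 2))   # symmetrize; components may be indefinite
A = A + 3.0 * np.eye(d)                   # average Hessian is roughly 3I + small noise
b = rng.normal(size=(m, n_local, d))

def grad(i, j, x):                        # gradient of one local component
    return A[i, j] @ x - b[i, j]

def full_local_grad(i, x):                # full local gradient on agent i
    return np.mean([grad(i, j, x) for j in range(n_local)], axis=0)

# Doubly stochastic ring mixing matrix (lazy Metropolis weights).
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] += 0.25
    W[i, (i + 1) % m] += 0.25

def mix(Z, rounds=3):                     # multi-consensus: repeat the gossip step
    for _ in range(rounds):
        Z = W @ Z
    return Z

eta, epochs, inner = 0.05, 30, n_local
X = np.zeros((m, d))                                            # one iterate per agent
V_prev = np.array([full_local_grad(i, X[i]) for i in range(m)])
G = V_prev.copy()                                               # gradient trackers

for _ in range(epochs):
    snap = X.copy()                       # SVRG snapshot on each agent
    mu = np.array([full_local_grad(i, snap[i]) for i in range(m)])
    for _ in range(inner):
        X = mix(X - eta * G)              # descent step followed by multi-consensus
        j = rng.integers(n_local)         # shared index across agents (simplification)
        V = np.array([grad(i, j, X[i]) - grad(i, j, snap[i]) + mu[i]
                      for i in range(m)]) # SVRG variance-reduced local gradients
        G = mix(G + V - V_prev)           # gradient-tracking recursion
        V_prev = V

x_bar = X.mean(axis=0)
A_bar = A.reshape(-1, d, d).mean(axis=0)
b_bar = b.reshape(-1, d).mean(axis=0)
print("grad norm at average iterate:", np.linalg.norm(A_bar @ x_bar - b_bar))
print("consensus error:", np.linalg.norm(X - x_bar))
```

Increasing `rounds` in `mix` drives the agents toward consensus faster per iteration, which is the role the abstract attributes to multi-consensus mixing; the acceleration component of the proposed method is omitted from this sketch.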