Nested Distributed Gradient Methods with Adaptive Quantized Communication
Format: Article
Language: English
Abstract: In this paper, we consider minimizing a sum of local convex objective functions in a distributed setting, where communication can be costly. We propose and analyze a class of nested distributed gradient methods with adaptive quantized communication (NEAR-DGD+Q). We show the effect of performing multiple quantized communication steps on the rate of convergence and on the size of the neighborhood of convergence, and prove R-Linear convergence to the exact solution with an increasing number of consensus steps and adaptive quantization. We test the performance of the method, as well as some practical variants, on quadratic functions, and show the effects of multiple quantized communication steps in terms of iterations/gradient evaluations, communication, and cost.
DOI: 10.48550/arxiv.1903.08149
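For intuition, the sketch below illustrates the kind of iteration the abstract describes: each agent takes a gradient step on its local objective, then performs several consensus rounds in which only quantized copies are exchanged, with the quantization step shrinking as iterations progress. This is not the authors' implementation; the quadratic objectives, ring topology, uniform quantizer, and the schedules for the step size, quantization level, and number of consensus rounds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem: n agents, each with a local quadratic f_i(x) = 0.5 x'A_i x + b_i'x.
n, d = 5, 3
A = [np.diag(rng.uniform(1.0, 4.0, d)) for _ in range(n)]
b = [rng.normal(size=d) for _ in range(n)]
grad = lambda i, x: A[i] @ x + b[i]

# Doubly stochastic mixing matrix over a ring topology (assumed).
W = np.eye(n) * 0.5
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

def quantize(X, delta):
    """Uniform quantizer with step size delta (illustrative choice)."""
    return delta * np.round(X / delta)

alpha = 0.05          # local gradient step size (assumed)
X = np.zeros((n, d))  # row i holds agent i's local iterate

for k in range(200):
    # Local gradient step on each agent's own objective.
    Y = np.vstack([X[i] - alpha * grad(i, X[i]) for i in range(n)])
    # Nested quantized consensus: multiple mixing rounds, each exchanging
    # quantized copies; the quantization step shrinks with k ("adaptive").
    delta = 0.9 ** k
    t = 1 + k // 50   # more consensus rounds later on (assumed schedule)
    for _ in range(t):
        Y = W @ quantize(Y, delta)
    X = Y

# Compare the agents' average against the centralized minimizer of sum_i f_i.
x_star = np.linalg.solve(sum(A), -sum(b))
print("distance to optimum:", np.linalg.norm(X.mean(axis=0) - x_star))
```

With a fixed quantization step the iterates only reach a neighborhood of the solution; shrinking the step and increasing the number of consensus rounds is what the abstract credits for convergence to the exact solution.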