Federated Learning: Strategies for Improving Communication Efficiency
Main authors: Jakub Konečný, H. Brendan McMahan, Felix X. Yu, Peter Richtárik, Ananda Theertha Suresh, Dave Bacon
Format: Article
Language: English
Abstract: Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model while training data remains distributed over a large number of clients, each with unreliable and relatively slow network connections. We consider learning algorithms for this setting where, on each round, each client independently computes an update to the current model based on its local data and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model. The typical clients in this setting are mobile phones, and communication efficiency is of the utmost importance.

In this paper, we propose two ways to reduce the uplink communication cost: structured updates, where we directly learn an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, where we learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling before sending it to the server. Experiments on both convolutional and recurrent networks show that the proposed methods can reduce the communication cost by two orders of magnitude.
DOI: 10.48550/arxiv.1610.05492
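To make the sketched-update idea from the abstract concrete, the following is a minimal NumPy sketch of an uplink compressor that applies a random rotation, subsamples coordinates, and uniformly quantizes them, with the matching server-side reconstruction. It is an illustration under stated assumptions, not code from the paper: the function names (random_rotation, compress_update, decompress_update) and the parameter choices (sample_frac, num_bits) are invented for this example.

```python
import numpy as np

def random_rotation(dim, seed):
    """Random orthogonal matrix; client and server regenerate it from the shared seed."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return q

def compress_update(update, sample_frac=0.1, num_bits=2, seed=0):
    """Client side: rotate the update, keep a random subset of coordinates, quantize them."""
    dim = update.size
    rotated = random_rotation(dim, seed) @ update
    rng = np.random.default_rng(seed + 1)
    idx = rng.choice(dim, size=max(1, int(sample_frac * dim)), replace=False)
    vals = rotated[idx]
    lo, hi = vals.min(), vals.max()
    levels = 2 ** num_bits - 1
    quantized = np.round((vals - lo) / (hi - lo + 1e-12) * levels).astype(np.uint8)
    return idx, quantized, lo, hi

def decompress_update(idx, quantized, lo, hi, dim,
                      sample_frac=0.1, num_bits=2, seed=0):
    """Server side: dequantize, scatter back, rescale for subsampling, undo the rotation."""
    levels = 2 ** num_bits - 1
    rotated = np.zeros(dim)
    rotated[idx] = lo + quantized.astype(np.float64) / levels * (hi - lo)
    return random_rotation(dim, seed).T @ rotated / sample_frac

# Toy round trip on a fake 512-dimensional model update.
update = np.random.default_rng(42).standard_normal(512)
idx, q, lo, hi = compress_update(update)
recovered = decompress_update(idx, q, lo, hi, update.size)
print("relative error:", np.linalg.norm(recovered - update) / np.linalg.norm(update))
```

The random rotation here stands in for the structured rotations discussed in the abstract: spreading the update's energy across coordinates before quantizing tends to make the per-coordinate quantization error more uniform, at the cost of the extra (seed-shared) transform on each side.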