Federated Learning with Autotuned Communication-Efficient Secure Aggregation
Saved in:
Main authors: | , , , , |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
Abstract: | Federated Learning enables mobile devices to collaboratively learn a shared inference model while keeping all the training data on a user's device, decoupling the ability to do machine learning from the need to store the data in the cloud. Existing work on federated learning with limited communication demonstrates how random rotation can enable users' model updates to be quantized much more efficiently, reducing the communication cost between users and the server. Meanwhile, secure aggregation enables the server to learn an aggregate of at least a threshold number of devices' model contributions without observing any individual device's contribution in unaggregated form. In this paper, we highlight some of the challenges of setting the parameters for secure aggregation to achieve communication efficiency, especially in the context of the aggressively quantized inputs enabled by random rotation. We then develop a recipe for auto-tuning communication-efficient secure aggregation, based on specific properties of random rotation and secure aggregation -- namely, the predictable distribution of vector entries post-rotation and the modular wrapping inherent in secure aggregation. We present both theoretical results and initial experiments. |
DOI: | 10.48550/arxiv.1912.00131 |
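As a concrete illustration of the pipeline the abstract describes, here is a minimal end-to-end sketch in Python/NumPy: a randomized Hadamard rotation, uniform quantization, and pairwise-mask secure aggregation under modular arithmetic. All sizes, the 3-sigma clipping rule, and the mask construction are illustrative assumptions, not the paper's actual recipe; the point is to show why the modulus and the quantization range must be tuned together.

```python
import numpy as np

rng = np.random.default_rng(0)

def fwht(v):
    """Normalized fast Walsh-Hadamard transform; len(v) must be a power of 2.

    The normalized matrix H / sqrt(d) is orthogonal and self-inverse,
    so applying fwht twice recovers the input.
    """
    v = v.astype(float).copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a, b = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
            v[i:i + h], v[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return v / np.sqrt(len(v))

d, n_clients, bits = 8, 3, 8          # toy sizes, not the paper's settings
levels = 2 ** bits

# Shared random sign flips: rotating by H * diag(signs) spreads each
# update's energy evenly, so entries concentrate around N(0, ||x||^2 / d)
# -- the "predictable distribution of vector entries post-rotation".
signs = rng.choice([-1.0, 1.0], size=d)

updates = [rng.normal(size=d) for _ in range(n_clients)]

# Clip at roughly 3 standard deviations of the post-rotation entries.
clip = 3.0 * max(np.linalg.norm(x) for x in updates) / np.sqrt(d)

def quantize(x):
    """Uniform quantization of [-clip, clip] to integers in [0, levels - 1]."""
    s = np.clip((x + clip) / (2 * clip), 0.0, 1.0)
    return np.round(s * (levels - 1)).astype(np.int64)

# The secure-aggregation modulus must cover the aggregate's range,
# n_clients * (levels - 1); anything smaller and the modular wrap
# corrupts the sum. Tuning M jointly with clip and bits is the
# parameter-setting problem the abstract refers to.
M = n_clients * (levels - 1) + 1

# Pairwise random masks: client i adds r_ij, client j subtracts it,
# so masks cancel in the modular sum and the server never sees a raw update.
masks = {(i, j): rng.integers(0, M, size=d)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

masked = []
for i in range(n_clients):
    q = quantize(fwht(updates[i] * signs))
    for (a, b), r in masks.items():
        if a == i:
            q = q + r
        if b == i:
            q = q - r
    masked.append(np.mod(q, M))

agg = np.mod(np.sum(masked, axis=0), M)   # masks cancel mod M

# Dequantize the aggregate, then undo the rotation (H / sqrt(d), then signs).
sum_rotated = agg / (levels - 1) * (2 * clip) - n_clients * clip
recovered = fwht(sum_rotated) * signs

print("max abs error:", np.max(np.abs(recovered - np.sum(updates, axis=0))))
```

With these toy parameters the recovered sum matches the true sum up to quantization error; shrinking M below n_clients * (levels - 1) makes the wrapped aggregate unrecoverable, which is why the quantization and field-size parameters cannot be chosen independently.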