Federated Learning with Buffered Asynchronous Aggregation
Format: Article
Language: eng
Online access: Order full text
Abstract:
Scalability and privacy are two critical concerns for cross-device federated learning (FL) systems. In this work, we identify that synchronous FL - synchronized aggregation of client updates in FL - cannot scale efficiently beyond a few hundred clients training in parallel. It leads to diminishing returns in model performance and training speed, analogous to large-batch training. On the other hand, asynchronous aggregation of client updates in FL (i.e., asynchronous FL) alleviates the scalability issue. However, aggregating individual client updates is incompatible with Secure Aggregation, which could result in an undesirable level of privacy for the system. To address these concerns, we propose a novel buffered asynchronous aggregation method, FedBuff, that is agnostic to the choice of optimizer and combines the best properties of synchronous and asynchronous FL. We empirically demonstrate that FedBuff is 3.3x more efficient than synchronous FL and up to 2.5x more efficient than asynchronous FL, while being compatible with privacy-preserving technologies such as Secure Aggregation and differential privacy. We provide theoretical convergence guarantees in a smooth non-convex setting. Finally, we show that under differentially private training, FedBuff can outperform FedAvgM at low privacy settings and achieve the same utility for higher privacy settings.
DOI: 10.48550/arxiv.2106.06639
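
The abstract describes FedBuff only at a high level: clients train asynchronously, and the server aggregates their updates in a buffer before applying a single server step. The sketch below is a minimal, illustrative toy simulation of that buffered-aggregation loop, not the authors' implementation; the model vector, the `local_sgd` client routine, the staleness weighting, and all hyperparameters are assumptions made for illustration.

```python
# Illustrative sketch of buffered asynchronous aggregation (FedBuff-style).
# NOTE: toy numpy example; the client routine, staleness weighting, and
# hyperparameters are assumptions, not the paper's reference implementation.
import random
import numpy as np

DIM = 16            # toy model dimension (assumption)
BUFFER_SIZE = 10    # K: client updates aggregated per server step (assumption)
SERVER_LR = 1.0     # server learning rate
CLIENT_LR = 0.1     # client learning rate
NUM_UPDATES = 200   # total client updates to simulate

def local_sgd(model, steps=5):
    """Hypothetical client routine: a few SGD steps on a random quadratic."""
    w = model.copy()
    target = np.random.randn(DIM)          # stand-in for local data
    for _ in range(steps):
        w -= CLIENT_LR * (w - target)      # gradient of 0.5 * ||w - target||^2
    return w - model                       # client reports a delta, not a model

def staleness_weight(staleness):
    """Down-weight stale updates; 1/sqrt(1 + staleness) is one common choice."""
    return 1.0 / np.sqrt(1.0 + staleness)

server_model = np.zeros(DIM)
server_version = 0                          # number of server updates applied
buffer_sum = np.zeros(DIM)                  # running sum of weighted deltas
buffer_count = 0

# Real clients run concurrently; here asynchrony is simulated by letting each
# client start from a randomly chosen, possibly stale snapshot of the model.
snapshots = [(server_model.copy(), server_version)]
for _ in range(NUM_UPDATES):
    start_model, start_version = random.choice(snapshots[-5:])
    delta = local_sgd(start_model)
    staleness = server_version - start_version
    buffer_sum += staleness_weight(staleness) * delta
    buffer_count += 1

    if buffer_count == BUFFER_SIZE:
        # Server step: apply the averaged buffered update, then clear the buffer.
        server_model += SERVER_LR * (buffer_sum / BUFFER_SIZE)
        server_version += 1
        buffer_sum = np.zeros(DIM)
        buffer_count = 0
        snapshots.append((server_model.copy(), server_version))

print("final server model norm:", np.linalg.norm(server_model))
```

Because the server only ever consumes an aggregate of BUFFER_SIZE client deltas rather than individual updates, that aggregation point is where, per the abstract, Secure Aggregation or differentially private noise would plausibly be applied in a real system.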