Improving Federated Learning Communication Efficiency with Global Momentum Fusion for Gradient Compression Schemes
Saved in:

Main authors: | , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Communication costs in federated learning (FL) hinder the system's
ability to scale to more clients, and hence to more data. The FL setting
considered here adopts a hub-and-spoke network topology in which all clients
communicate through a central server. Reducing communication overheads via
techniques such as data compression has therefore been proposed to mitigate
this issue. Another challenge of federated learning is unbalanced data
distribution: in a typical federated learning setting, the data on each client
are not independent and identically distributed (non-IID). In this paper, we
propose a new compression compensation scheme called Global Momentum Fusion
(GMF), which reduces communication overheads between FL clients and the server
while maintaining comparable model accuracy in the presence of non-IID data.
GitHub repository:
https://github.com/tony92151/global-momentum-fusion-fl |
DOI: | 10.48550/arxiv.2211.09320 |
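The abstract does not spell out GMF's algorithmic details, so the following is only a minimal, illustrative sketch of a compression compensation scheme of this general shape: top-k gradient sparsification with a per-client error-feedback buffer, plus a server-held global momentum that each client fuses into its local gradient before compression. All names and hyperparameters here (`top_k_sparsify`, `FUSION`, `GMF_BETA`, `K_RATIO`) are assumptions for illustration, not the paper's exact formulation; see the GitHub repository above for the authors' implementation.

```python
# Illustrative sketch only: assumed top-k sparsification, error feedback,
# and a server-side global momentum fused into local gradients. Constants
# and names are hypothetical, not taken from the GMF paper.
import torch

GMF_BETA = 0.9   # assumed decay rate for the server's global momentum
FUSION = 0.5     # assumed weight when fusing global momentum into local grads
K_RATIO = 0.01   # assumed sparsity: keep the top 1% of gradient entries

def top_k_sparsify(grad: torch.Tensor, ratio: float) -> torch.Tensor:
    """Keep the largest-magnitude entries of grad; zero out the rest."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, idx = torch.topk(flat.abs(), k)
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.view_as(grad)

class Client:
    def __init__(self, shape):
        self.error = torch.zeros(shape)  # error-feedback residual

    def compress(self, local_grad, global_momentum):
        # Fuse the server's global momentum into the local gradient so that
        # non-IID clients share a common update direction before compression.
        fused = local_grad + FUSION * global_momentum
        corrected = fused + self.error        # re-inject previously dropped mass
        sparse = top_k_sparsify(corrected, K_RATIO)
        self.error = corrected - sparse       # remember what was dropped
        return sparse

class Server:
    def __init__(self, shape):
        self.global_momentum = torch.zeros(shape)

    def aggregate(self, sparse_updates):
        # Average the sparse client updates, then refresh the global momentum.
        avg = torch.stack(sparse_updates).mean(dim=0)
        self.global_momentum = GMF_BETA * self.global_momentum + avg
        return avg

# Usage: a few simulated rounds with random gradients standing in for
# real local training.
shape = (1000,)
server = Server(shape)
clients = [Client(shape) for _ in range(4)]
for _ in range(3):
    updates = [c.compress(torch.randn(shape), server.global_momentum)
               for c in clients]
    step = server.aggregate(updates)
```

In this sketch the error-feedback buffer re-injects whatever the sparsifier dropped in earlier rounds, while the shared global momentum gives non-IID clients a common update direction before compression; that combination reflects the intuition the abstract attributes to GMF, under the stated assumptions.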