Federated Learning Communication-Efficiency Framework via Coreset Construction
Published in: The Computer Journal, 2023-09, Vol. 66 (9), pp. 2077-2101
Authors:
Format: Article
Language: English
Online access: Full text
Abstract
Federated learning (FL) can learn a shared global model across multiple client devices without breaching privacy requirements. An essential challenge, however, is that devices in FL usually have limited network bandwidth, making inefficient communication an important bottleneck for FL deployment. Current studies try to overcome this shortcoming by compressing the model-update bits uploaded by every client, but they do not explore the underlying reason why redundant parameters are generated in the first place. In this paper, we propose the Coreset-Based Federated Learning framework (CBFL), a novel FL communication framework that approaches the problem from the perspective of data redundancy. Instead of training a regular network model on the full datasets, CBFL trains a much smaller evolving network model on an extracted coreset, which intrinsically reduces the overall transmission bits and yields efficient computation while maintaining desirable model accuracy. CBFL comprises a novel FedCoreset construction algorithm run at selected clients and a distributed model evolution scheme that fits the model to the constructed coreset. The training model size is dynamically adapted to the coreset, either removing a fraction of unimportant connections or adding important ones at each communication round. Experimental results show that CBFL transmits only about 13% of the communication bits and saves around 56% of computing time while incurring only a 2% degradation in model accuracy.
ISSN: 0010-4620; 1460-2067
DOI: 10.1093/comjnl/bxac062
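
The abstract above describes two mechanisms: extracting a small, weighted coreset on each client, and evolving a sparse model by pruning unimportant connections and growing new ones each communication round. The following is a minimal sketch of those two ideas in Python with NumPy; it is not the paper's actual FedCoreset algorithm. The function names (`extract_coreset`, `evolve_mask`), the loss-proportional sampling, and the random-regrowth heuristic are illustrative assumptions only.

```python
import numpy as np

def extract_coreset(X, y, losses, size):
    """Sample a weighted coreset: draw examples in proportion to their
    per-example loss (a common sensitivity-style importance proxy) and
    attach inverse-probability weights so estimates stay unbiased.
    NOTE: a hypothetical stand-in for the paper's FedCoreset construction."""
    p = (losses + 1e-12) / (losses + 1e-12).sum()   # sampling distribution
    idx = np.random.choice(len(X), size=size, replace=False, p=p)
    weights = 1.0 / (size * p[idx])                 # importance weights
    return X[idx], y[idx], weights

def evolve_mask(weights, mask, prune_frac=0.1, grow_frac=0.1):
    """One model-evolution step on a binary connection mask: drop the
    smallest-magnitude active connections, then reactivate an equal-scale
    set of inactive ones. The paper scores which connections to add;
    random regrowth keeps this sketch simple."""
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(mask == 0)
    n_prune = int(prune_frac * active.size)
    n_grow = min(int(grow_frac * active.size), inactive.size)
    # prune: smallest |w| among currently active connections
    drop = active[np.argsort(np.abs(weights.ravel()[active]))[:n_prune]]
    mask.flat[drop] = 0
    # grow: re-enable a random subset of inactive connections
    add = np.random.choice(inactive, size=n_grow, replace=False)
    mask.flat[add] = 1
    return mask

# Usage sketch with synthetic data (all values illustrative):
X = np.random.randn(1000, 20)
y = np.random.randint(0, 2, size=1000)
losses = np.abs(np.random.randn(1000))             # per-example losses
Xc, yc, wc = extract_coreset(X, y, losses, size=100)

W = np.random.randn(20, 10)                        # one weight matrix
M = np.ones_like(W)                                # all connections active
M = evolve_mask(W, M)                              # one evolution round
```

In this reading, communication savings come from the mask: only the (shrinking) set of active connections needs to be transmitted each round, and training on the coreset rather than the full local dataset reduces client compute.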