CoCoFL: Communication- and Computation-Aware Federated Learning via Partial NN Freezing and Quantization
Published in: | Transactions on Machine Learning Research, 06/2023 |
Main authors: | |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Devices participating in federated learning (FL) typically have heterogeneous communication, computation, and memory resources. However, in synchronous FL, all devices must finish training by the same deadline dictated by the server. Our results show that training only a subset of the neural network (NN) on constrained devices, i.e., dropping neurons/filters as proposed by the state of the art, is inefficient and prevents these devices from making an effective contribution to the model. This causes unfairness w.r.t. the achievable accuracies of constrained devices, especially in cases with a skewed distribution of class labels across devices. We present a novel FL technique, CoCoFL, which maintains the full NN structure on all devices. To adapt to the devices' heterogeneous resources, CoCoFL freezes and quantizes selected layers, reducing communication, computation, and memory requirements, while other layers are still trained in full precision, enabling a high accuracy to be reached. Thereby, CoCoFL efficiently utilizes the available resources on devices and allows constrained devices to make a significant contribution to the FL system, increasing fairness among participants (accuracy parity) and significantly improving the final accuracy of the model. |
DOI: | 10.48550/arxiv.2203.05468 |
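The abstract's key mechanism, freezing selected layers on constrained devices while training the remaining layers in full precision, can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the helper `configure_partial_training`, the toy model, and the choice of frozen block are assumptions made for this example, and CoCoFL's quantized execution of frozen layers is only noted in a comment.

```python
# Illustrative sketch only (assumed names/model), not CoCoFL's actual code.
import torch
import torch.nn as nn

def configure_partial_training(model: nn.Module, frozen: set) -> list:
    """Freeze the named top-level blocks and return the still-trainable parameters."""
    trainable = []
    for name, module in model.named_children():
        freeze = name in frozen
        for p in module.parameters():
            p.requires_grad = not freeze
            if not freeze:
                trainable.append(p)
        # Frozen layers still run in the forward pass, so the full NN structure
        # is preserved; only their gradient computation and updates are skipped.
        # (CoCoFL additionally runs frozen layers with quantized operators to
        # reduce computation, which is omitted here.)
    return trainable

# Example: freeze the early feature extractor on a constrained device.
model = nn.Sequential()
model.add_module("features", nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()))
model.add_module("head", nn.Sequential(nn.Flatten(), nn.Linear(16 * 32 * 32, 10)))

params = configure_partial_training(model, frozen={"features"})
optimizer = torch.optim.SGD(params, lr=0.1)

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()   # gradients are computed only for the trainable head
optimizer.step()  # frozen layers remain unchanged
```

In this sketch, only the parameters of the unfrozen layers produce gradients and change between rounds, so only they need to be uploaded to the server, which is what reduces the communication, computation, and memory load on a constrained device while keeping the full NN structure intact.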