Model Compression by Count Sketch for Over-the-Air Stateless Federated Learning
Saved in:
Published in: | IEEE internet of things journal 2024-06, Vol.11 (12), p.21689-21703 |
Main authors: | , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Motivated by the rapidly increasing computing performance of devices and the abundance of device-generated data, federated learning (FL) has emerged as a new distributed machine learning (ML) scheme with a wide range of applications. However, it is well known that FL can be severely degraded by communication overhead, as it relies heavily on communication between clients and a central server. To overcome this communication bottleneck, the wireless communication community has explored AirComp FL, applying over-the-air computation (AirComp) for model aggregation. In this article, we introduce a novel AirComp FL algorithm, A-FedCS, which utilizes count sketch (CS) for model compression. A-FedCS exhibits scalability, addressing challenges faced by existing approaches that struggle with scarce channel resources or rarely revisiting clients. Experimental results demonstrate that the proposed scheme outperforms state-of-the-art schemes, including CA-DSGD and D-DSGD. We show that the improvement is more significant in stateless FL through experiments with various settings of tasks, transmission power, bandwidth, and the number of clients. Additionally, we provide a mathematical analysis of A-FedCS by deriving its convergence rate. |
ISSN: | 2327-4662 |
DOI: | 10.1109/JIOT.2024.3376771 |
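For readers unfamiliar with the compression primitive named in the abstract: a count sketch is a linear, hash-based summary of a high-dimensional vector. The sketch below is a minimal illustrative implementation in Python, not the paper's A-FedCS algorithm; all class and parameter names here are assumptions for illustration only. The key property relevant to AirComp is linearity: the sum of clients' sketches equals the sketch of the summed model updates, so aggregation can happen over the air on the compressed representation.

```python
import random


class CountSketch:
    """Minimal count sketch: d rows of width w, each with a universal hash
    for the bucket index and an independent random sign function.
    Illustrative only -- not the A-FedCS implementation from the article."""

    P = 2_147_483_647  # Mersenne prime for universal hashing

    def __init__(self, depth, width, seed=0):
        rng = random.Random(seed)
        self.depth, self.width = depth, width
        self.table = [[0.0] * width for _ in range(depth)]
        # Per-row hash parameters: (a, b) for the bucket, (c, d) for the sign.
        self.params = [
            (rng.randrange(1, self.P), rng.randrange(self.P),
             rng.randrange(1, self.P), rng.randrange(self.P))
            for _ in range(depth)
        ]

    def _bucket(self, r, i):
        a, b, _, _ = self.params[r]
        return (a * i + b) % self.P % self.width

    def _sign(self, r, i):
        _, _, c, d = self.params[r]
        return 1.0 if (c * i + d) % self.P % 2 == 0 else -1.0

    def add(self, vec):
        """Fold a dense vector into the sketch (the compression step).
        Because this update is linear, summing two sketches built with the
        same seed is equivalent to sketching the sum of the two vectors."""
        for i, v in enumerate(vec):
            if v:
                for r in range(self.depth):
                    self.table[r][self._bucket(r, i)] += self._sign(r, i) * v

    def estimate(self, i):
        """Median-of-rows estimate of coordinate i of the sketched vector."""
        vals = sorted(self._sign(r, i) * self.table[r][self._bucket(r, i)]
                      for r in range(self.depth))
        return vals[len(vals) // 2]
```

A 1000-dimensional update vector with a single heavy coordinate can thus be stored in a 5 x 64 table, and because the estimate is a median over independent rows, occasional hash collisions in one row do not dominate the recovered value.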