Heterogeneous Federated Learning via Generative Model-Aided Knowledge Distillation in the Edge
Saved in:
Published in: | IEEE Internet of Things Journal 2024-10, p. 1-1 |
---|---|
Main Authors: | , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | Federated Learning (FL) has recently gained popularity as a framework for training Machine Learning (ML) models in a distributed and privacy-preserving manner. Traditional FL frameworks often struggle with model and statistical heterogeneity among participating clients, impacting learning performance and practicality. To overcome these fundamental limitations, we introduce Fed2KD+, a novel FL framework that leverages a set of tiny unified models and Conditional Variational Auto-Encoders (CVAEs) to enable FL training across heterogeneous models on network clients. Using forward and backward distillation processes, Fed2KD+ allows a seamless exchange of knowledge, mitigating both data and model heterogeneity problems. Moreover, we propose a cosine similarity penalty in the loss function of CVAE+ to enhance the generalizability of the CVAE in non-IID scenarios, improving the adaptability and efficiency of the framework. Furthermore, the framework is co-designed with the Radio Access Network (RAN) architecture, reducing fronthaul traffic volume and improving scalability. Extensive evaluations on one image dataset and two IoT datasets demonstrate the superiority of Fed2KD+ in achieving higher accuracy and faster convergence compared to existing methods, including FedAvg, FedMD, and FedGen. Finally, we performed hardware profiling on the Raspberry Pi and NVIDIA Jetson Nano to quantify the additional resources required to train the unified and CVAE+ models. |
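The record does not specify how the cosine similarity penalty enters the CVAE+ loss. As a minimal sketch only, the following shows one plausible way a standard CVAE objective (reconstruction term plus KL divergence) could be augmented with such a penalty; all function names, arguments, and the choice of reference latent are hypothetical assumptions, not the authors' implementation.

```python
import numpy as np

def cvae_loss_with_cosine_penalty(x, x_recon, mu, logvar, z, z_ref, lam=0.1):
    """Hypothetical CVAE loss: ELBO terms plus a cosine-similarity penalty.

    x, x_recon : flattened input and its reconstruction
    mu, logvar : parameters of the approximate posterior q(z | x, c)
    z, z_ref   : latent code and an assumed reference latent (e.g., a
                 class prototype) that the penalty aligns it with
    lam        : assumed penalty weight (hyperparameter)
    """
    # Reconstruction term (mean squared error as a Gaussian likelihood proxy)
    recon = np.mean((x - x_recon) ** 2)
    # KL divergence between N(mu, diag(exp(logvar))) and the prior N(0, I)
    kl = -0.5 * np.sum(1 + logvar - mu ** 2 - np.exp(logvar))
    # Cosine-similarity penalty: large when z and z_ref point in
    # different directions, zero when they are perfectly aligned
    cos = np.dot(z, z_ref) / (np.linalg.norm(z) * np.linalg.norm(z_ref) + 1e-8)
    penalty = lam * (1.0 - cos)
    return recon + kl + penalty
```

Under this sketch, the penalty pulls latent codes of same-class samples toward a shared direction, which is one way a generalizability improvement in non-IID settings could be encouraged.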
---|---|
ISSN: | 2327-4662 |
DOI: | 10.1109/JIOT.2024.3488565 |