Personalized and privacy-enhanced federated learning framework via knowledge distillation



Bibliographic Details
Published in: Neurocomputing (Amsterdam) 2024-03, Vol. 575, p. 127290, Article 127290
Main Authors: Yu, Fangchao, Wang, Lina, Zeng, Bo, Zhao, Kai, Yu, Rongwei
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: Federated learning is a distributed learning framework in which all participants jointly train a global model while keeping their data private. In existing federated learning frameworks, all clients share the same global model and cannot customize the model architecture to their needs. In this paper, we propose FLKD (federated learning with knowledge distillation), a personalized and privacy-enhanced federated learning framework. In FLKD, the global model serves as a medium for knowledge transfer, and each client can customize its local model while training alongside the global model via mutual learning. Furthermore, the participation of heterogeneous local models changes the training strategy of the global model, giving FLKD a natural immunity to gradient leakage attacks. We conduct extensive empirical experiments to train and evaluate our framework. Experimental results show that FLKD provides an effective way to solve the problem of model heterogeneity and can effectively defend against gradient leakage attacks.
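The mutual learning the abstract describes (a customized local model and the shared global model distilling from each other's predictions) is not detailed in this record. A minimal sketch of a deep-mutual-learning loss, assuming a cross-entropy term on the true label plus a temperature-scaled KL term pulling each model toward the other's softened output; the function names, temperature `T`, and weight `alpha` are illustrative, not from the paper:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over a logit vector.
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def kl_div(p, q, eps=1e-12):
    # KL(p || q) between two probability vectors.
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def mutual_learning_losses(local_logits, global_logits, label, T=2.0, alpha=0.5):
    # Each model's loss = cross-entropy on the true label
    # plus a KL term toward the other model's softened prediction.
    # (T**2 rescales the gradient magnitude of the softened KL term,
    # as is conventional in distillation.)
    p_local = softmax(local_logits, T)
    p_global = softmax(global_logits, T)
    ce_local = -np.log(max(softmax(local_logits)[label], 1e-12))
    ce_global = -np.log(max(softmax(global_logits)[label], 1e-12))
    loss_local = (1 - alpha) * ce_local + alpha * (T ** 2) * kl_div(p_global, p_local)
    loss_global = (1 - alpha) * ce_global + alpha * (T ** 2) * kl_div(p_local, p_global)
    return loss_local, loss_global

# Example: a 3-class prediction where both models favor class 0.
l_loc, l_glob = mutual_learning_losses([2.0, 0.5, 0.1], [1.5, 0.8, 0.2], label=0)
print(l_loc, l_glob)
```

Because each side only exchanges predictions (not gradients or weights of the heterogeneous local model), a scheme of this shape is compatible with the paper's claim that heterogeneous local training limits what a gradient leakage attack on the shared model can recover.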
ISSN:0925-2312
1872-8286
DOI:10.1016/j.neucom.2024.127290