iDP-FL: A fine-grained and privacy-aware federated learning framework for deep neural networks

Bibliographic Details
Published in: Information sciences, 2024-09, Vol. 679, p. 121035, Article 121035
Authors: Zhang, Junpeng; Zhu, Hui; Wang, Fengwei; Zheng, Yandong; Liu, Zhe; Li, Hui
Format: Article
Language: English
Abstract: Federated learning (FL), as a distributed machine learning paradigm, promises that multiple parties can jointly train a model without sharing local data. Recent research demonstrates that an adversary can deduce sensitive data from shared model updates. To protect participants' data privacy, differential privacy (DP) is deployed in various FL scenarios owing to its lightweight computational overhead. However, the trade-off between the utility and privacy of local models is the fundamental problem that must be solved in DP applications. In this paper, we propose a fine-grained and privacy-aware FL framework (iDP-FL) that keeps training data and model parameters confidential while markedly improving the model's prediction accuracy. Specifically, we first design an individualized perturbation noise (IPN) algorithm that adds different artificial noise depending on the importance of each participant's model weights. Then, we propose a perturbation mechanism on the aggregator side, a DP protection method under the premise of loss-function convergence, which prevents the global model parameters from being stolen by malicious adversaries. Moreover, to achieve lightweight protection throughout learning, we present an advanced bilateral perturbation (ABP) protocol to perform iterative training. Theoretical analysis indicates that iDP-FL provides a DP guarantee and yields superior prediction accuracy and privacy preservation at the same privacy level. Finally, extensive experiments on real-world datasets demonstrate that our approach shows significant advantages under limited privacy budgets, especially at small privacy losses.
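The core of the IPN idea above, adding artificial noise whose magnitude depends on each weight's importance, can be sketched as follows. The abstract does not specify the paper's importance measure or noise allocation, so the magnitude-based importance and inverse-importance Gaussian scaling below are illustrative assumptions, not iDP-FL's actual algorithm.

```python
import numpy as np

def individualized_noise(weights, sigma_base=1.0, rng=None):
    """Illustrative importance-dependent perturbation (not the paper's IPN).

    Assumption: importance is approximated by normalized weight magnitude,
    and more important weights receive proportionally less Gaussian noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    magnitude = np.abs(weights)
    # Normalized importance in [0, 1]; epsilon guards an all-zero vector.
    importance = magnitude / (magnitude.sum() + 1e-12)
    # Per-weight noise scale: shrinks as importance grows.
    sigma = sigma_base * (1.0 - importance)
    return weights + rng.normal(0.0, 1.0, size=weights.shape) * sigma

# Example: the largest weight (2.0) gets the smallest noise scale.
w = np.array([0.5, -1.0, 2.0, 0.0])
perturbed = individualized_noise(w, sigma_base=0.1, rng=np.random.default_rng(42))
```

In a uniform DP mechanism every coordinate would share one noise scale; the point of a fine-grained allocation is to spend less of the noise budget on the weights that matter most for accuracy while keeping the overall perturbation level comparable.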
ISSN: 0020-0255, 1872-6291
DOI: 10.1016/j.ins.2024.121035