Privacy-Preserving Federated Learning with Malicious Clients and Honest-but-Curious Servers

Bibliographic Details
Published in: IEEE Transactions on Information Forensics and Security, 2023-01, Vol. 18, p. 1-1
Authors: Le, Junqing; Zhang, Di; Lei, Xinyu; Jiao, Long; Zeng, Kai; Liao, Xiaofeng
Format: Article
Language: English
Description
Abstract: Federated learning (FL) enables multiple clients to jointly train a global learning model while keeping their training data local, thereby protecting clients' privacy. However, security issues remain in FL: honest-but-curious servers may mine private information from clients' model updates, and malicious clients may launch poisoning attacks to disturb or break global model training. Moreover, most previous works address the security of FL in the presence of only honest-but-curious servers or only malicious clients. In this paper, we consider a stronger and more practical threat model in FL, in which honest-but-curious servers and malicious clients coexist, termed the non-fully trusted model. In non-fully trusted FL, privacy protection schemes against honest-but-curious servers render all model updates indistinguishable, which makes malicious model updates difficult to detect. To address this challenge, we present an Adaptive Privacy-Preserving FL (Ada-PPFL) scheme with Differential Privacy (DP) as the underlying technology, which simultaneously protects clients' privacy and eliminates the adverse effects of malicious clients on model training. Specifically, we propose an adaptive DP strategy that achieves strong client-level privacy protection while minimizing the impact on the prediction accuracy of the global model. In addition, we introduce DPAD, an algorithm specifically designed to precisely detect malicious model updates, even when those updates are protected by DP measures. Finally, theoretical analysis and experimental results show that the proposed Ada-PPFL enables client-level privacy protection with 35% savings in DP noise, while maintaining prediction accuracy similar to that of models trained without malicious attacks.
ISSN: 1556-6013
eISSN: 1556-6021
DOI: 10.1109/TIFS.2023.3295949