A Robust Privacy-Preserving Federated Learning Model Against Model Poisoning Attacks

Bibliographic Details
Published in: IEEE Transactions on Information Forensics and Security, 2024, Vol. 19, pp. 6693-6708
Authors: Yazdinejad, Abbas; Dehghantanha, Ali; Karimipour, Hadis; Srivastava, Gautam; Parizi, Reza M.
Format: Article
Language: English
Abstract: Although federated learning offers a level of privacy by aggregating user data without direct access, it remains inherently vulnerable to various attacks, including poisoning attacks in which malicious actors submit gradients that reduce model accuracy. In addressing model poisoning attacks, existing defense strategies primarily concentrate on detecting suspicious local gradients over plaintext. However, detecting non-independent and identically distributed encrypted gradients poses significant challenges for existing methods. Moreover, tackling computational complexity and communication overhead becomes crucial in privacy-preserving federated learning, particularly in the context of encrypted gradients. To address these concerns, we propose a robust privacy-preserving federated learning model that is resilient against model poisoning attacks without sacrificing accuracy. Our approach introduces an internal auditor that evaluates encrypted gradient similarity and distribution to differentiate between benign and malicious gradients, employing a Gaussian Mixture Model and Mahalanobis Distance for Byzantine-tolerant aggregation. The proposed model utilizes Additive Homomorphic Encryption to ensure confidentiality while minimizing computational and communication overhead. Our model demonstrates superior performance in accuracy and privacy compared to existing strategies and encryption techniques, such as Fully Homomorphic Encryption and Two-Trapdoor Homomorphic Encryption. The proposed model effectively addresses the challenge of detecting malicious encrypted non-independent and identically distributed gradients with low computational and communication overhead.
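The detection step the abstract describes (fitting a Gaussian Mixture Model over client gradients and scoring each one by its Mahalanobis distance to the dominant cluster before aggregating) can be sketched on plaintext toy data as follows. The gradient dimensions, client counts, two-component setup, and distance threshold are illustrative assumptions for this sketch, not the paper's actual parameters, and real use would operate on encrypted gradients via the paper's auditor protocol:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated flattened gradient updates: 18 benign clients near the true
# update, 2 poisoned clients submitting far-off gradients (toy data).
benign = rng.normal(loc=0.0, scale=0.1, size=(18, 5))
poisoned = rng.normal(loc=3.0, scale=0.1, size=(2, 5))
grads = np.vstack([benign, poisoned])

# Fit a two-component GMM over the gradient vectors.
gmm = GaussianMixture(n_components=2, random_state=0).fit(grads)

# Treat the most populated component as the benign cluster.
labels = gmm.predict(grads)
benign_label = int(np.bincount(labels).argmax())
mean = gmm.means_[benign_label]
cov_inv = np.linalg.inv(gmm.covariances_[benign_label])

def mahalanobis(x: np.ndarray) -> float:
    # Distance of one gradient to the benign cluster's distribution.
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

dists = np.array([mahalanobis(g) for g in grads])

# Byzantine-tolerant aggregation: average only the gradients whose
# distance falls under an (illustrative) threshold.
threshold = 3.0
keep = dists < threshold
aggregated = grads[keep].mean(axis=0)
```

With this setup the two poisoned gradients land in their own GMM component, receive very large Mahalanobis distances, and are excluded from the aggregate, while the surviving benign gradients average out close to the true update.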
ISSN: 1556-6013, 1556-6021
DOI: 10.1109/TIFS.2024.3420126