Privacy-Enhancing and Robust Backdoor Defense for Federated Learning on Heterogeneous Data

Bibliographic Details
Published in: IEEE Transactions on Information Forensics and Security, 2024-01, Vol. 19, p. 1-1
Authors: Chen, Zekai, Yu, Shengxing, Fan, Mingyuan, Liu, Ximeng, Deng, Robert H.
Format: Article
Language: English
Description
Abstract: Federated learning (FL) allows multiple clients to train deep learning models collaboratively while protecting sensitive local datasets. However, in practical application scenarios FL remains highly vulnerable on two fronts: security, through federated backdoor attacks (FBA) that inject triggers, and privacy, through potential data leakage from uploaded models. Existing FBA defense strategies consider only specific and limited attacker models, and injecting a sufficient amount of noise can merely mitigate, rather than eliminate, the attack. To address these deficiencies, we introduce a Robust Federated Backdoor Defense Scheme (RFBDS) and its privacy-preserving variant (PrivRFBDS) to ensure the elimination of adversarial backdoors. RFBDS counters FBA through amplified magnitude sparsification, adaptive OPTICS clustering, and adaptive clipping. RFBDS is evaluated on three benchmark datasets and compared extensively with state-of-the-art studies. The results demonstrate the promising defense performance of RFBDS: in terms of the average FBA success rate over MNIST, FMNIST, and CIFAR10, it improves on clustering-based defense methods by 31.75% ~ 73.75%, and by up to 0.03% ~ 56.90% in Non-IID settings. Besides, our privacy-preserving shuffling in PrivRFBDS is 7.83e-5 ~ 0.42× that of state-of-the-art works.
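
The abstract names three server-side components (amplified magnitude sparsification, adaptive OPTICS clustering, and adaptive clipping) without giving code. The following is a minimal sketch of how such an aggregation pipeline could be wired together; it is not the authors' implementation, and the function names and parameters (top_k_sparsify, robust_aggregate, k_ratio, clip_norm, min_samples) are assumptions made for illustration. It uses scikit-learn's OPTICS with plain top-k sparsification and fixed-norm clipping in place of the paper's amplified/adaptive variants.

    # Sketch only: a simplified RFBDS-style robust aggregation step.
    import numpy as np
    from sklearn.cluster import OPTICS

    def top_k_sparsify(update, k_ratio=0.1):
        # Keep only the k largest-magnitude coordinates of a flattened update.
        flat = np.asarray(update, dtype=float).ravel().copy()
        k = max(1, int(k_ratio * flat.size))
        threshold = np.partition(np.abs(flat), -k)[-k]
        flat[np.abs(flat) < threshold] = 0.0
        return flat

    def robust_aggregate(client_updates, k_ratio=0.1, clip_norm=1.0, min_samples=3):
        # Sparsify each client update, cluster with OPTICS, keep the largest
        # (assumed benign) cluster, clip its members' L2 norms, and average.
        sparse = np.stack([top_k_sparsify(u, k_ratio) for u in client_updates])
        labels = OPTICS(min_samples=min_samples).fit(sparse).labels_
        valid = labels[labels >= 0]          # label -1 marks OPTICS "noise" points
        if valid.size:
            majority = np.bincount(valid).argmax()
            kept = sparse[labels == majority]
        else:
            kept = sparse                    # fall back to all clients
        norms = np.linalg.norm(kept, axis=1, keepdims=True)
        kept = kept * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
        return kept.mean(axis=0)

In this sketch, a server would call robust_aggregate on the flattened model deltas received in a round and add the result to the global model; the paper's adaptive clustering and clipping would replace the fixed min_samples and clip_norm used here.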
ISSN: 1556-6013, 1556-6021
DOI: 10.1109/TIFS.2023.3326983