Backdoor Defense in Federated Learning Using Differential Testing and Outlier Detection
Format: Article
Language: English
Abstract: The goal of federated learning (FL) is to train one global model by aggregating model parameters updated independently on edge devices, without accessing users' private data. However, FL is susceptible to backdoor attacks, in which a small fraction of malicious agents injects a targeted misclassification behavior into the global model by uploading polluted model updates to the server. In this work, we propose DifFense, an automated defense framework that protects an FL system from backdoor attacks by leveraging differential testing and two-step MAD outlier detection, without requiring any prior knowledge of attack scenarios or direct access to local model parameters. We empirically show that our detection method defends against varying numbers of potential attackers while consistently achieving convergence of the global model comparable to that obtained under federated averaging (FedAvg). We further corroborate the effectiveness and generalizability of our method through comparison with prior defense techniques, such as Multi-Krum and coordinate-wise median aggregation. Our detection method reduces the average backdoor accuracy of the global model to below 4% and achieves a false negative rate of zero.
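The record does not spell out the paper's two-step MAD procedure, so the following is only a minimal sketch of MAD (median absolute deviation) outlier detection over per-client scores, with a hypothetical second pass on the survivors as one plausible reading of "two-step". All names, the threshold value, and the score inputs are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mad_outliers(scores, threshold=3.0):
    """Flag entries whose modified z-score exceeds `threshold`.

    `scores` is a 1-D array of per-client test statistics (e.g., one
    differential-testing score per submitted update).
    """
    scores = np.asarray(scores, dtype=float)
    median = np.median(scores)
    mad = np.median(np.abs(scores - median))
    if mad == 0.0:
        return np.zeros_like(scores, dtype=bool)  # no spread: flag nothing
    # 0.6745 scales the statistic so it is comparable to a standard
    # z-score under a normal distribution (0.6745 ~ inverse CDF at 0.75).
    modified_z = 0.6745 * (scores - median) / mad
    return np.abs(modified_z) > threshold

def two_step_mad(scores, threshold=3.0):
    """Hypothetical two-step variant: re-test after removing the first-pass
    outliers, so a large outlier cannot mask smaller ones."""
    scores = np.asarray(scores, dtype=float)
    first = mad_outliers(scores, threshold)
    second = np.zeros_like(first)
    second[~first] = mad_outliers(scores[~first], threshold)
    return first | second
```

In an FL round, the server would compute one such score per submitted update and exclude flagged clients from aggregation; the actual scoring via differential testing is described in the paper itself.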
DOI: 10.48550/arxiv.2202.11196
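For context on the baselines named in the abstract: coordinate-wise median aggregation replaces FedAvg's element-wise mean with an element-wise median across client updates, discarding a minority of extreme (possibly poisoned) values in each coordinate. A minimal sketch, with function names assumed and equal client weights for simplicity:

```python
import numpy as np

def fedavg(updates):
    # FedAvg baseline: element-wise mean of the stacked client updates.
    return np.mean(np.stack(updates), axis=0)

def coordinate_wise_median(updates):
    # Robust baseline: take the median independently in every coordinate.
    return np.median(np.stack(updates), axis=0)
```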