Sine: Similarity is Not Enough for Mitigating Local Model Poisoning Attacks in Federated Learning
Saved in:
Published in: IEEE Transactions on Dependable and Secure Computing, 2024-09, Vol. 21 (5), p. 4481-4494
Main authors: ,
Format: Article
Language: eng
Keywords:
Online access: Order full text
Abstract: Federated learning is a collaborative machine learning paradigm that brings the model to the edge for training over the participants' local data under the orchestration of a trusted server. Although this paradigm protects data privacy, the aggregator has no control over the local data or models at the edge, so malicious participants can perturb their locally held data or model to post an insidious update that degrades global model accuracy. Recent Byzantine-robust aggregation rules can defend against data poisoning attacks, and model poisoning attacks have in turn become more ingenious and adaptive to existing defenses; however, these attacks are crafted against specific aggregation rules. This work presents a generic model poisoning attack framework named Sine (Similarity is not enough), which harnesses vulnerabilities in cosine similarity to increase the impact of poisoning attacks by 20-30%. Sine makes convergence unachievable by maintaining the persistence of the attack. Further, we propose an effective defense technique called FLTC (FL Trusted Coordinates), which selects trusted coordinates and aggregates them based on the change in their direction and magnitude with respect to a trusted base model update. FLTC successfully defends against poisoning attacks, including adaptive model poisoning attacks, restricting the attack impact to 2-4%.
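The abstract's core claim is that cosine similarity alone cannot flag a poisoned update. A minimal sketch below (my illustration, not the paper's attack; all names are hypothetical) shows why: scaling an update preserves its cosine similarity to the benign direction while arbitrarily inflating its influence on the aggregate, so a similarity-only defense accepts it.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two model-update vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
benign = rng.normal(size=1_000)   # hypothetical benign client update
malicious = 50.0 * benign         # same direction, 50x the magnitude

# A similarity-only check cannot tell these apart: both scores are 1.0,
# yet averaging in `malicious` would dominate the global update.
print(cosine_similarity(benign, benign))     # 1.0
print(cosine_similarity(benign, malicious))  # 1.0 (up to float precision)
```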
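The abstract describes FLTC only at a high level: per-coordinate trust judged by the change in direction and magnitude relative to a trusted base model update. The sketch below is one plausible reading of that rule, assuming sign agreement for direction and a bounded magnitude ratio; the `magnitude_factor` threshold and the fallback to the base update are my assumptions, not the authors' specification.

```python
import numpy as np

def fltc_like_aggregate(client_updates: np.ndarray,
                        base_update: np.ndarray,
                        magnitude_factor: float = 2.0) -> np.ndarray:
    """client_updates: shape (n_clients, dim); base_update: shape (dim,).

    A coordinate of a client update is treated as trusted if its sign
    agrees with the trusted base update and its magnitude stays within
    magnitude_factor times the base's (assumed criteria).
    """
    same_direction = np.sign(client_updates) == np.sign(base_update)
    bounded = np.abs(client_updates) <= magnitude_factor * np.abs(base_update)
    trusted = same_direction & bounded            # (n_clients, dim) mask

    counts = trusted.sum(axis=0)
    summed = np.where(trusted, client_updates, 0.0).sum(axis=0)
    # Average only trusted coordinates; where no client coordinate is
    # trusted, fall back to the base update (assumption).
    return np.where(counts > 0, summed / np.maximum(counts, 1), base_update)

rng = np.random.default_rng(1)
base = rng.normal(size=10)
benign = base + 0.1 * rng.normal(size=(4, 10))   # four honest clients
malicious = -25.0 * base[None, :]                # sign-flipped, scaled attacker
updates = np.vstack([benign, malicious])

# The attacker's coordinates fail both checks, so the aggregate stays
# close to the benign direction.
print(fltc_like_aggregate(updates, base))
```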
ISSN: 1545-5971, 1941-0018
DOI: 10.1109/TDSC.2024.3353317