Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning
Saved in:
Main Authors:
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Abstract: Federated learning (FL) is vulnerable to poisoning attacks, where adversaries corrupt the global aggregation results and cause denial-of-service (DoS). Unlike recent model poisoning attacks that optimize the amplitude of malicious perturbations along certain prescribed directions to cause DoS, we propose a Flexible Model Poisoning Attack (FMPA) that can achieve versatile attack goals. We consider a practical threat scenario where no extra knowledge about the FL system (e.g., aggregation rules or updates on benign devices) is available to adversaries. FMPA exploits global historical information to construct an estimator that predicts the next round of the global model as a benign reference. It then fine-tunes the reference model to obtain the desired poisoned model with low accuracy and small perturbations. Besides causing DoS, FMPA can be naturally extended to launch a fine-grained controllable attack, making it possible to precisely reduce the global accuracy. Armed with such precise control, malicious FL service providers can gain advantages over their competitors without being noticed, hence opening a new attack surface in FL beyond DoS. Even for the purpose of DoS, experiments show that FMPA significantly decreases the global accuracy, outperforming six state-of-the-art attacks.
DOI: 10.48550/arxiv.2304.10783
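
The abstract describes FMPA's mechanism only at a high level: predict the next global model from historical global models, then fine-tune that benign reference into a poisoned model with low accuracy but a small perturbation. The sketch below is one plausible reading of that pipeline, assuming a momentum-style extrapolation for the estimator and projected gradient ascent for the fine-tuning step; the function names, the surrogate loss, and every hyperparameter here are hypothetical, not taken from the paper.

```python
# Minimal sketch of the FMPA idea as described in the abstract. The concrete
# choices (momentum extrapolation, projected gradient ascent) are illustrative
# assumptions, not the paper's exact algorithm.
import numpy as np

def predict_next_global(w_prev: np.ndarray, w_curr: np.ndarray,
                        momentum: float = 0.9) -> np.ndarray:
    """Estimate the next-round global model from two historical global
    models, assuming the aggregated updates drift smoothly across rounds."""
    return w_curr + momentum * (w_curr - w_prev)

def fine_tune_poison(w_ref, loss_grad, accuracy_fn=None, target_acc=0.0,
                     step=0.01, n_steps=100, max_pert=0.1):
    """Fine-tune the benign reference toward higher loss (lower accuracy)
    while projecting the perturbation into a small ball, so the poisoned
    model stays close to what the server expects. If accuracy_fn and
    target_acc are given, stop once the estimated accuracy reaches the
    target (fine-grained control); otherwise run all steps (DoS)."""
    w = w_ref.copy()
    for _ in range(n_steps):
        if accuracy_fn is not None and accuracy_fn(w) <= target_acc:
            break                                # target accuracy reached
        w = w + step * loss_grad(w)              # ascend the loss
        pert = w - w_ref
        norm = np.linalg.norm(pert)
        if norm > max_pert:                      # keep the perturbation small
            w = w_ref + pert * (max_pert / norm)
    return w

# Toy usage with a quadratic surrogate loss L(w) = 0.5 * ||w||^2, whose
# gradient is simply w; a real attack would use the task loss on local data.
w_prev, w_curr = np.zeros(4), np.full(4, 0.1)
w_ref = predict_next_global(w_prev, w_curr)
poisoned = fine_tune_poison(w_ref, loss_grad=lambda w: w)
```

Calling `fine_tune_poison` with an `accuracy_fn` and a nonzero `target_acc` would correspond to the fine-grained control mode the abstract mentions, where the attacker stops degrading the model at a chosen accuracy level instead of driving it all the way to DoS.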