Towards multi-party targeted model poisoning attacks against federated learning systems
Published in: High-Confidence Computing 2021-06, Vol. 1 (1), p. 100002, Article 100002
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: The federated learning framework builds a deep learning model collaboratively across a group of connected devices by sharing only local parameter updates with the central parameter server. Nonetheless, the lack of transparency into local data resources makes it prone to adversarial federated attacks, which have shown an increasing ability to degrade learning performance. Existing research efforts focus either on single-party attacks, which assume an impractical perfect-knowledge setting and have limited stealth, or on random attacks, which offer no control over the attack's effect. In this paper, we investigate a new multi-party adversarial attack with imperfect knowledge of the target system. Controlled by an adversary, a number of compromised devices collaboratively launch targeted model poisoning attacks, intending to misclassify the targeted samples while remaining stealthy under different detection strategies. Specifically, the compromised devices jointly minimize the loss function of model training in different scenarios. To overcome the update scaling problem, we develop a new boosting strategy by introducing two stealth metrics. Experimental results show that, under both perfect-knowledge and limited-knowledge settings, the multi-party attack successfully evades detection strategies while guaranteeing convergence. We also demonstrate that the learned model achieves high accuracy on the targeted samples, which confirms the significant impact of the multi-party attack on federated learning systems.
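The abstract describes the boosted, stealth-constrained targeted poisoning only at a high level; the sketch below illustrates one plausible reading of a single federated-averaging round. It is not the authors' implementation: the logistic-regression clients, the single L2-norm stealth metric, and the explicit n/m boosting factor are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of boosted, norm-constrained
# targeted model poisoning in one federated-averaging round.
# Assumptions for illustration: logistic-regression clients, an L2-norm
# "stealth metric", and boosting by n_clients / n_compromised.
import numpy as np

rng = np.random.default_rng(0)

def grad_update(w, X, y, lr=0.5):
    """One local gradient step for logistic regression; returns the update delta."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    g = X.T @ (p - y) / len(y)
    return -lr * g

# Toy setup: 10 clients, 2-D features, binary labels; 2 devices are compromised.
n_clients, n_comp, dim = 10, 2, 2
w_global = np.zeros(dim)
clients = [(rng.normal(size=(50, dim)) + c % 2, np.full(50, c % 2, float))
           for c in range(n_clients)]

# Targeted samples the adversary wants misclassified (true class 1, flipped to 0).
X_tgt = rng.normal(size=(5, dim)) + 1.0
y_tgt_flipped = np.zeros(5)

benign = [grad_update(w_global, X, y) for X, y in clients[n_comp:]]
# Stealth metric (assumed): malicious updates must stay within the benign norm range.
stealth_bound = max(np.linalg.norm(u) for u in benign)

# Each compromised device minimizes the loss on the flipped targets, boosts the
# update so it survives server-side averaging, then clips it back under the bound.
malicious = []
for _ in range(n_comp):
    u = grad_update(w_global, X_tgt, y_tgt_flipped)
    u *= n_clients / n_comp                     # boosting to offset averaging
    norm = np.linalg.norm(u)
    if norm > stealth_bound:                    # norm clipping for stealth
        u *= stealth_bound / norm
    malicious.append(u)

w_global += np.mean(benign + malicious, axis=0)  # server-side FedAvg
print("poisoned global model:", w_global)
```

The boosting factor compensates for the server dividing each update by the number of participants, while the clipping step keeps the malicious update's norm inside the benign range so a norm-based detector would not flag it; the paper's two stealth metrics are abstracted here into that single bound.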
ISSN: 2667-2952
DOI: 10.1016/j.hcc.2021.100002