DeSMP: Differential Privacy-exploited Stealthy Model Poisoning Attacks in Federated Learning
Format: Article
Language: eng
Online access: Order full text
Abstract: Federated learning (FL) has recently emerged as a popular machine learning technique due to its efficacy in safeguarding clients' confidential information. Nevertheless, despite inherent and additional privacy-preserving mechanisms (e.g., differential privacy, secure multi-party computation), FL models remain vulnerable to various privacy-violating and security-compromising attacks (e.g., data or model poisoning) because of their numerous attack vectors, which in turn render the models either ineffective or sub-optimal. Existing adversarial models focusing on untargeted model poisoning attacks cannot be stealthy and persistent at the same time because these goals conflict (large-scale attacks are easier to detect, and vice versa), so the problem remains unsolved in this adversarial learning paradigm. Considering this, in this paper we analyze the adversarial learning process in an FL setting and show that a stealthy and persistent model poisoning attack can be conducted by exploiting the differential noise. More specifically, we develop an unprecedented DP-exploited stealthy model poisoning (DeSMP) attack for FL models. Our empirical analysis on both classification and regression tasks using two popular datasets demonstrates the effectiveness of the proposed DeSMP attack. Moreover, we develop a novel reinforcement learning (RL)-based defense strategy against such model poisoning attacks, which intelligently and dynamically selects the privacy level of the FL models to minimize the DeSMP attack surface and facilitate attack detection.
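The abstract does not detail the attack mechanics, but the core idea it names, hiding a poisoning perturbation inside the noise that differential privacy already adds to client updates, can be illustrated with a minimal sketch. The Python snippet below is a hypothetical illustration, not the paper's actual DeSMP algorithm: the function names, the parameters (clip_norm, noise_multiplier, stealth_factor), and the norm-matching heuristic are assumptions made here for demonstration only.

```python
import numpy as np

def dp_noised_update(local_update, clip_norm, noise_multiplier, rng):
    """Benign DP client: clip the local update and add calibrated Gaussian noise."""
    scale = min(1.0, clip_norm / (np.linalg.norm(local_update) + 1e-12))
    clipped = local_update * scale
    sigma = noise_multiplier * clip_norm          # per-coordinate noise std. dev.
    return clipped + rng.normal(0.0, sigma, size=clipped.shape)

def stealthy_poisoned_update(local_update, attack_direction, clip_norm,
                             noise_multiplier, stealth_factor, rng):
    """Hypothetical malicious client (illustration only): spend part of the
    expected DP noise budget on a poisoning term, keeping the update's overall
    deviation close to that of an honestly noised update."""
    scale = min(1.0, clip_norm / (np.linalg.norm(local_update) + 1e-12))
    clipped = local_update * scale
    sigma = noise_multiplier * clip_norm
    d = clipped.size
    # Poisoning term whose norm is a stealth_factor fraction of the expected
    # noise norm (about sigma * sqrt(d) for i.i.d. Gaussian noise).
    unit = attack_direction / (np.linalg.norm(attack_direction) + 1e-12)
    poison = stealth_factor * sigma * np.sqrt(d) * unit
    # Residual noise restores the expected total variance of the perturbation.
    residual_sigma = sigma * np.sqrt(max(0.0, 1.0 - stealth_factor ** 2))
    residual = rng.normal(0.0, residual_sigma, size=clipped.shape)
    return clipped + poison + residual

rng = np.random.default_rng(0)
update = rng.normal(size=100)   # a client's true local update
target = rng.normal(size=100)   # direction the attacker wants the global model to drift
benign = dp_noised_update(update, 1.0, 1.1, rng)
poisoned = stealthy_poisoned_update(update, target, 1.0, 1.1, 0.5, rng)
# Both deviations have comparable magnitude, so the poisoned update is hard to
# flag by norm-based anomaly checks alone.
print(np.linalg.norm(benign - update), np.linalg.norm(poisoned - update))
```

In this sketch the poisoning term and the residual noise are balanced so that the total perturbation has roughly the same expected norm as honest DP noise, which is one simple way a server-side anomaly detector could be evaded; the paper's actual attack construction and its RL-based privacy-level selection are described in the full text.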
DOI: 10.48550/arxiv.2109.09955