Mitigating Sybils in Federated Learning Poisoning
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Machine learning (ML) over distributed multi-party data is required for a
variety of domains. Existing approaches, such as federated learning, collect
the outputs computed by a group of devices at a central aggregator and run
iterative algorithms to train a globally shared model. Unfortunately, such
approaches are susceptible to a variety of attacks, including model poisoning,
which is made substantially worse in the presence of sybils.
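The aggregation setup described above can be pictured with a short sketch (illustrative only; the function names and the plain-averaging rule are assumptions for exposition, not the paper's exact experimental setup). With unweighted averaging, a handful of sybils submitting the same poisoned update can dominate the honest contribution:

```python
# Minimal sketch of central aggregation of client updates (illustrative only).
import numpy as np

def aggregate(global_model, client_updates, lr=1.0):
    """Average the collected client updates and apply them to the shared model."""
    mean_update = np.mean(client_updates, axis=0)
    return global_model + lr * mean_update

# Example: one honest client vs. three sybils pushing an identical poisoned update.
global_model = np.zeros(4)
honest = [np.array([0.1, -0.2, 0.05, 0.0])]
sybils = [np.array([1.0, 1.0, 1.0, 1.0])] * 3      # identical poisoned updates
print(aggregate(global_model, honest + sybils))    # result is dominated by the sybils
```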
In this paper we first evaluate the vulnerability of federated learning to
sybil-based poisoning attacks. We then describe FoolsGold, a novel
defense to this problem that identifies poisoning sybils based on the diversity
of client updates in the distributed learning process. Unlike prior work, our
system does not bound the expected number of attackers, requires no auxiliary
information outside of the learning process, and makes fewer assumptions about
clients and their data.
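As a rough illustration of the diversity idea (a minimal sketch; cosine similarity over each client's accumulated updates is assumed here as the diversity signal, and the helper below is hypothetical rather than the paper's exact FoolsGold scoring):

```python
# Hedged sketch: down-weight clients whose accumulated updates are too similar
# to one another, since sybils tend to push in the same poisoned direction.
import numpy as np

def diversity_weights(history):
    """history: (n_clients, dim) array of accumulated per-client updates.
    Returns a weight in [0, 1] per client; near-duplicate clients get ~0."""
    norms = np.linalg.norm(history, axis=1, keepdims=True) + 1e-12
    unit = history / norms
    sim = unit @ unit.T                       # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)            # ignore self-similarity
    max_sim = sim.max(axis=1)                 # similarity to the most similar peer
    return np.clip(1.0 - max_sim, 0.0, 1.0)   # sybil-like clients -> low weight

# Sybils that echo each other receive near-zero weight; a diverse honest client keeps ~1.
history = np.array([[1.0, 1.0, 1.0],          # sybil A
                    [1.0, 1.0, 1.0],          # sybil B (identical direction)
                    [0.2, -0.7, 0.4]])        # honest client
print(diversity_weights(history))
```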
In our evaluation we show that FoolsGold outperforms existing
state-of-the-art approaches to countering sybil-based label-flipping and
backdoor poisoning attacks. Our results hold for different distributions of
client data, varying poisoning targets, and various sybil strategies.
Code can be found at: https://github.com/DistributedML/FoolsGold
DOI: 10.48550/arxiv.1808.04866