Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy
Saved in:
Main authors: | , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | Privacy-preserving federated learning enables a population of distributed
clients to jointly learn a shared model while keeping client training data
private, even from an untrusted server. Prior works do not provide efficient
solutions that protect against collusion attacks in which parties collaborate
to expose an honest client's model parameters. We present an efficient
mechanism based on oblivious distributed differential privacy that is the first
to protect against such client collusion, including the "Sybil" attack in which
a server preferentially selects compromised devices or simulates fake devices.
We leverage the novel privacy mechanism to construct a secure federated
learning protocol and prove the security of that protocol. We conclude with
empirical analysis of the protocol's execution speed, learning accuracy, and
privacy performance on two data sets within a realistic simulation of 5,000
distributed network clients. |
---|---|
DOI: | 10.48550/arxiv.2202.09897 |
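The abstract describes a distributed differential privacy mechanism: the noise that protects each client's model update is contributed collectively by the clients rather than by a single trusted party. The sketch below illustrates only that general idea, not the paper's oblivious, collusion-resistant protocol; all names and constants (NUM_CLIENTS, CLIP_NORM, SIGMA_TOTAL) are hypothetical choices for this example.

```python
import numpy as np

# Hypothetical constants for this sketch only.
NUM_CLIENTS = 100    # clients participating in one aggregation round
DIM = 10             # dimensionality of the model update
CLIP_NORM = 1.0      # per-client L2 clipping bound
SIGMA_TOTAL = 0.5    # std. dev. of the total noise required on the aggregate

def client_contribution(update, rng):
    """Clip a local update and add this client's share of the Gaussian noise.

    Each client adds noise with variance SIGMA_TOTAL**2 / NUM_CLIENTS, so the
    sum of all shares carries the full variance SIGMA_TOTAL**2 needed for the
    differential privacy guarantee; no single party adds all the noise.
    """
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, CLIP_NORM / max(norm, 1e-12))
    noise_share = rng.normal(0.0, SIGMA_TOTAL / np.sqrt(NUM_CLIENTS),
                             size=update.shape)
    return clipped + noise_share

rng = np.random.default_rng(seed=0)
local_updates = [rng.normal(size=DIM) for _ in range(NUM_CLIENTS)]
aggregate = sum(client_contribution(u, rng) for u in local_updates) / NUM_CLIENTS
print(aggregate)
```

In this naive form, colluding or server-simulated ("Sybil") clients can withhold or subtract their noise shares, leaving an honest client's update under-protected; closing that gap is the role of the oblivious mechanism described in the abstract.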