Generalization Techniques Empirically Outperform Differential Privacy against Membership Inference
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | Differentially private training algorithms provide protection against one of the most popular attacks in machine learning: the membership inference attack. However, these privacy algorithms incur a loss of the model's classification accuracy, creating a privacy-utility trade-off. The amount of noise that differential privacy requires to provide strong theoretical protection guarantees in deep learning typically renders the models unusable, but prior work has observed that even lower noise levels provide acceptable empirical protection against existing membership inference attacks.
In this work, we look for alternatives to differential privacy for empirically protecting against membership inference attacks. We study the protection that simply following good machine learning practices (not designed with privacy in mind) offers against membership inference. We evaluate the performance of state-of-the-art techniques, such as pre-training and sharpness-aware minimization, alone and together with differentially private training algorithms, and find that, when using early stopping, the algorithms without differential privacy can provide both higher utility and higher privacy than their differentially private counterparts. These findings challenge the belief that differential privacy is a good defense against existing membership inference attacks. |
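The threat throughout is membership inference: deciding whether a given example was part of a model's training set, typically by exploiting the gap between the model's loss on members and on unseen points. As a rough illustration only (this is not the specific attack suite evaluated in the paper), the sketch below implements a simple loss-threshold attack in the style of Yeom et al.; the loss arrays and the synthetic gamma-distributed values are assumptions made for the example.

```python
# Minimal loss-threshold membership inference sketch (illustrative only).
import numpy as np

def threshold_attack(member_losses, nonmember_losses, candidate_losses):
    """Guess membership by comparing each candidate's loss to a threshold
    chosen to best separate known member / non-member loss distributions."""
    losses = np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones_like(member_losses),
                             np.zeros_like(nonmember_losses)])

    # Sweep thresholds and keep the one with the best balanced accuracy.
    best_thr, best_acc = None, -1.0
    for thr in np.unique(losses):
        preds = (losses <= thr).astype(float)
        tpr = preds[labels == 1].mean()
        tnr = 1.0 - preds[labels == 0].mean()
        acc = 0.5 * (tpr + tnr)
        if acc > best_acc:
            best_thr, best_acc = thr, acc

    # Candidates with loss below the threshold are guessed to be members.
    return (candidate_losses <= best_thr).astype(int), best_thr

# Usage with synthetic values: members tend to have lower loss than
# non-members, which is exactly the generalization gap the attack exploits.
rng = np.random.default_rng(0)
member_losses = rng.gamma(shape=2.0, scale=0.1, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.3, size=1000)
guesses, thr = threshold_attack(member_losses, nonmember_losses,
                                rng.gamma(2.0, 0.2, size=10))
```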
DOI: | 10.48550/arxiv.2110.05524 |
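The generalization techniques named in the abstract, sharpness-aware minimization (SAM) and early stopping, can be combined in an ordinary training loop without any privacy machinery. The following PyTorch sketch is a minimal illustration under assumed names (`model`, `loss_fn`, `optimizer`, `train_loader`, `val_loader`, `rho`, `patience`), not the authors' implementation: each update takes a SAM step (an ascent step to nearby "worst-case" weights, then a descent step applied from the original weights), and training halts once validation loss stops improving.

```python
# Minimal PyTorch sketch (not the authors' code) of SAM plus early stopping.
import torch

def sam_step(model, loss_fn, optimizer, x, y, rho=0.05):
    """One SAM update: perturb the weights toward the local loss-ascent
    direction, take the gradient there, then step from the original weights."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()

    params = [p for p in model.parameters() if p.grad is not None]
    grads = [p.grad.detach().clone() for p in params]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12

    # Ascent step: move to approximately the worst nearby weights.
    eps = [rho * g / grad_norm for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)

    # Gradient at the perturbed weights (the "sharpness-aware" gradient).
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()

    # Restore the original weights, then apply the sharpness-aware gradient.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()
    return loss.item()

def train_with_early_stopping(model, loss_fn, optimizer, train_loader,
                              val_loader, max_epochs=100, patience=5):
    """Stop once validation loss has not improved for `patience` epochs."""
    best_val, stale = float("inf"), 0
    for _ in range(max_epochs):
        model.train()
        for x, y in train_loader:
            sam_step(model, loss_fn, optimizer, x, y)

        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(x), y).item()
                           for x, y in val_loader) / len(val_loader)

        if val_loss < best_val:
            best_val, stale = val_loss, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return model
```

Both pieces are aimed at reducing the member/non-member loss gap, which is the same gap the loss-threshold attack above exploits.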