Augment then Smooth: Reconciling Differential Privacy with Certified Robustness
Format: Article
Language: English
Abstract: Machine learning models are susceptible to a variety of attacks that can erode trust, including attacks against the privacy of training data and adversarial examples that jeopardize model accuracy. Differential privacy and certified robustness are effective frameworks for combating these two threats respectively, as each provides future-proof guarantees. However, we show that standard differentially private model training is insufficient for providing strong certified robustness guarantees. Indeed, combining differential privacy and certified robustness in a single system is non-trivial, leading previous works to introduce complex training schemes that lack flexibility. In this work, we present DP-CERT, a simple and effective method that achieves both privacy and robustness guarantees simultaneously by integrating randomized smoothing into standard differentially private model training. Compared to the leading prior work, DP-CERT gives up to a 2.5% increase in certified accuracy for the same differential privacy guarantee on CIFAR10. Through in-depth per-sample metric analysis, we find that larger certifiable radii correlate with smaller local Lipschitz constants, and show that DP-CERT effectively reduces Lipschitz constants compared to other differentially private training methods. The code is available at github.com/layer6ai-labs/dp-cert.
DOI: 10.48550/arxiv.2306.08656
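
For context, below is a minimal, self-contained sketch of the randomized smoothing certification procedure (Cohen et al., 2019) that the abstract says DP-CERT integrates into differentially private training. It is illustrative only: the function names, the toy linear classifier, and all parameter values are assumptions chosen for demonstration, not the authors' implementation from the linked repository. The smoothed classifier predicts the class most likely under Gaussian perturbations of the input, and the certified L2 radius is sigma * Phi^{-1}(p_A), where p_A is a high-confidence lower bound on the probability of the top class.

```python
# Illustrative sketch of randomized smoothing certification (Cohen et al., 2019).
# Hypothetical helper names; not the code from github.com/layer6ai-labs/dp-cert.
import numpy as np
from scipy.stats import beta, norm


def sample_noise_counts(f, x, sigma, num, num_classes, batch=100):
    """Count base-classifier predictions over `num` Gaussian perturbations of x."""
    counts = np.zeros(num_classes, dtype=int)
    remaining = num
    while remaining > 0:
        b = min(batch, remaining)
        noisy = x[None, ...] + sigma * np.random.randn(b, *x.shape)
        for label in f(noisy):
            counts[label] += 1
        remaining -= b
    return counts


def lower_confidence_bound(k, n, alpha):
    """One-sided Clopper-Pearson lower bound on a binomial proportion k/n."""
    return 0.0 if k == 0 else beta.ppf(alpha, k, n - k + 1)


def certify(f, x, sigma, num_classes=10, n0=100, n=10000, alpha=0.001):
    """Predict with the smoothed classifier and return a certified L2 radius.

    f maps a batch of inputs to integer class labels. Returns (class, radius),
    or (None, 0.0) when no certificate holds (the classifier abstains).
    """
    # Step 1: a small sample selects the candidate top class.
    counts0 = sample_noise_counts(f, x, sigma, n0, num_classes)
    c_hat = int(np.argmax(counts0))
    # Step 2: a larger, independent sample lower-bounds
    # p_A = P(f(x + N(0, sigma^2 I)) = c_hat) with confidence 1 - alpha.
    counts = sample_noise_counts(f, x, sigma, n, num_classes)
    p_a = lower_confidence_bound(counts[c_hat], n, alpha)
    if p_a <= 0.5:
        return None, 0.0
    # Certified radius from Cohen et al.: R = sigma * Phi^{-1}(p_A).
    return c_hat, sigma * norm.ppf(p_a)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(10, 32))  # toy 10-class linear classifier

    def f(batch):
        return np.argmax(batch.reshape(len(batch), -1) @ W.T, axis=1)

    x = rng.normal(size=(32,))
    label, radius = certify(f, x, sigma=0.25)
    print(f"prediction: {label}, certified L2 radius: {radius:.3f}")
```

In DP-CERT, per the abstract, the base classifier f would instead be trained with standard differentially private optimization; one reading consistent with the title "Augment then Smooth" is that training sees Gaussian-noise-augmented inputs so that f remains accurate under the smoothing distribution. Note that the certification step above only queries the already-trained model, so by the post-processing property of differential privacy it consumes no additional privacy budget.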