Practical Privacy Filters and Odometers with Rényi Differential Privacy and Applications to Differentially Private Deep Learning
Format: | Article |
---|---|
Language: | English |
Online access: | Order full text |
Summary: | Differential Privacy (DP) is the leading approach to privacy-preserving deep
learning. As such, there are multiple efforts to provide drop-in integration of
DP into popular frameworks. These efforts, which add noise to each gradient
computation to make it DP, rely on composition theorems to bound the total
privacy loss incurred over this sequence of DP computations.
However, existing composition theorems present a tension between efficiency
and flexibility. Most theorems require all computations in the sequence to have
a predefined DP parameter, called the privacy budget. This prevents the design
of training algorithms that adapt the privacy budget on the fly, or that
terminate early to reduce the total privacy loss. Alternatively, the few
existing composition results for adaptive privacy budgets provide complex
bounds on the privacy loss, with constants too large to be practical.
In this paper, we study DP composition under adaptive privacy budgets through
the lens of Rényi Differential Privacy, proving a simpler composition theorem
with smaller constants, making it practical enough to use in algorithm design.
We demonstrate two applications of this theorem for DP deep learning: adapting
the noise or batch size online to improve a model's accuracy within a fixed
total privacy loss, and stopping early when fine-tuning a model to reduce total
privacy loss. |
DOI: | 10.48550/arxiv.2103.01379 |
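
The first paragraph of the abstract summarizes the usual DP deep-learning recipe: clip each gradient to bound its sensitivity, add Gaussian noise, and bound the total privacy loss by composing the per-step guarantees. The sketch below is a minimal illustration of that recipe with standard additive RDP accounting (Gaussian-mechanism RDP of ε(α) = α/(2σ²), converted to (ε, δ)-DP at the end). It is not the paper's implementation: it assumes a fixed, predefined noise multiplier per step and ignores subsampling amplification, so the reported ε is loose.

```python
# Minimal sketch of per-step gradient noising with additive RDP accounting.
# Assumptions: L2 sensitivity equals clip_norm after clipping, Gaussian noise,
# no subsampling amplification. Illustrative only.
import numpy as np

ALPHAS = np.array([1.5, 2, 4, 8, 16, 32, 64])  # Renyi orders to track

def gaussian_rdp(noise_multiplier):
    # RDP of the Gaussian mechanism (sensitivity 1): eps(alpha) = alpha / (2 sigma^2)
    return ALPHAS / (2.0 * noise_multiplier ** 2)

def rdp_to_dp(rdp, delta):
    # Standard RDP -> (eps, delta)-DP conversion, minimized over orders
    return float(np.min(rdp + np.log(1.0 / delta) / (ALPHAS - 1.0)))

def noisy_gradient(grad, clip_norm, noise_multiplier, rng):
    # Clip to bound sensitivity, then add Gaussian noise scaled to the clip norm
    grad = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    return grad + rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)

# Composition: the RDP spend simply adds across steps at each order alpha
rng = np.random.default_rng(0)
total_rdp = np.zeros_like(ALPHAS, dtype=float)
for step in range(1000):
    g = rng.normal(size=10)                         # stand-in for a real gradient
    g_priv = noisy_gradient(g, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
    total_rdp += gaussian_rdp(1.1)                  # fixed privacy budget per step
print("(eps, delta=1e-5):", rdp_to_dp(total_rdp, delta=1e-5))
```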
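The adaptive applications mentioned at the end of the abstract can be pictured as a "privacy filter": the noise multiplier (or batch size) is chosen on the fly, and before each step the running RDP spend is checked against a fixed target, halting once the next step would exceed it. The sketch below shows only that bookkeeping pattern under the same assumptions as above; the schedule `choose_noise` and the parameter values are hypothetical, and the validity of composing adaptively chosen budgets is exactly what the paper's theorem establishes.

```python
# Minimal sketch of a privacy-filter loop with an adaptively chosen noise multiplier.
# Illustrative only; without subsampling amplification the budget is consumed quickly.
import numpy as np

ALPHAS = np.array([1.5, 2, 4, 8, 16, 32, 64])

def gaussian_rdp(noise_multiplier):
    return ALPHAS / (2.0 * noise_multiplier ** 2)

def rdp_to_dp(rdp, delta):
    return float(np.min(rdp + np.log(1.0 / delta) / (ALPHAS - 1.0)))

def choose_noise(step):
    # Hypothetical adaptive schedule: start noisier, lower the noise later
    return 8.0 if step < 100 else 4.0

target_eps, delta = 2.0, 1e-5
spent_rdp = np.zeros_like(ALPHAS, dtype=float)
for step in range(10_000):
    sigma = choose_noise(step)
    proposed = spent_rdp + gaussian_rdp(sigma)
    if rdp_to_dp(proposed, delta) > target_eps:     # filter: refuse the step
        print(f"privacy filter halts training at step {step}")
        break
    spent_rdp = proposed
    # ... run one noisy gradient step with this sigma ...
print("final (eps, delta):", rdp_to_dp(spent_rdp, delta), delta)
```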