Auditing Privacy Mechanisms via Label Inference Attacks
Format: Article
Language: English
Abstract: We propose reconstruction advantage measures to audit label privatization mechanisms. A reconstruction advantage measure quantifies the increase in an attacker's ability to infer the true label of an unlabeled example when provided with a private version of the labels in a dataset (e.g., an aggregate of labels from different users, or noisy labels output by randomized response), compared to an attacker that only observes the feature vectors but may have prior knowledge of the correlation between features and labels. We consider two such auditing measures: one additive and one multiplicative. These incorporate previous approaches taken in the literature on empirical auditing and differential privacy. The measures allow us to place a variety of proposed privatization schemes -- some differentially private, some not -- on the same footing. We analyze these measures theoretically under a distributional model which encapsulates reasonable adversarial settings. We also quantify their behavior empirically on real and simulated prediction tasks. Across a range of experimental settings, we find that differentially private schemes dominate or match the privacy-utility tradeoff of more heuristic approaches.
DOI: 10.48550/arxiv.2406.02797