When Differential Privacy Implies Syntactic Privacy

Bibliographic Details
Published in: IEEE Transactions on Information Forensics and Security, 2022, Vol. 17, pp. 2110-2124
Authors: Ekenstedt, Emelie; Ong, Lawrence; Liu, Yucheng; Johnson, Sarah; Yeoh, Phee Lep; Kliewer, Joerg
Format: Article
Language: English
Abstract: Two main privacy models for sanitising datasets are differential privacy (DP) and syntactic privacy. The former limits the impact that any individual's values can have on the sanitised output, while the latter restructures the dataset before publication so that any record is linked to multiple sensitive data values. Although both models provide mechanisms to sanitise data, they are usually applied independently of each other, and very little is known about how they relate. Knowing how privacy models are related can help us develop a deeper understanding of privacy and can inform how a single privacy mechanism can fulfil multiple privacy models. In this paper, we introduce a framework that determines whether the privacy mechanisms of one privacy model can also guarantee privacy for another privacy model. We apply our framework to understand the relationship between DP and a form of syntactic privacy called t-closeness. We demonstrate, for the first time, how DP and t-closeness can be interpreted in terms of each other by introducing generalisations and extensions of both models that explain the transition from one model to the other. Finally, we show how applying one mechanism to guarantee multiple privacy models increases data utility compared to applying separate mechanisms for each privacy model.
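
For reference, the standard textbook definitions the abstract alludes to are, roughly, as follows; note these are the classical forms only, not the generalisations and extensions introduced in the paper, whose details may differ.

  Differential privacy (Dwork et al.): a randomised mechanism M is epsilon-differentially private if, for every pair of datasets D, D' differing in one record and every set S of possible outputs,

      \Pr[M(D) \in S] \le e^{\varepsilon} \cdot \Pr[M(D') \in S].

  t-closeness (Li et al.): an equivalence class whose sensitive-attribute distribution is P satisfies t-closeness with respect to the distribution Q of the sensitive attribute in the whole table if

      d(P, Q) \le t,

  where d is typically the Earth Mover's Distance; a table satisfies t-closeness if every equivalence class does.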
ISSN: 1556-6013, 1556-6021
DOI: 10.1109/TIFS.2022.3177953