Why current differential privacy schemes are inapplicable for correlated data publishing?
Saved in:
Published in: World Wide Web (Bussum), 2021, Vol. 24 (1), p. 1-23
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Although data analysis and mining technologies can efficiently provide intelligent, personalized services, data owners may not always be willing to share their true data because of privacy concerns. Recently, differential privacy (DP) has achieved a good trade-off between data utility and privacy guarantees by publishing noisy outputs. Nonetheless, DP still risks privacy leakage when it handles correlated data directly. Current schemes attempt to extend DP to publish correlated data, but they either violate DP or deliver low data utility. In this paper, we explore the essential cause of this inapplicability. Specifically, we suppose that it stems from the difference in correlation between the noise and the original data. To verify this supposition, we propose the notion of the Correlation-Distinguishability Attack (CDA), which separates IID (independent and identically distributed) noise from correlated data. Furthermore, taking time series as an example, we design an optimum filter to realize CDA in practical applications. Experimental results support our supposition and show that the privacy guarantee of current approaches degrades under CDA.
ISSN: 1386-145X; eISSN: 1573-1413
DOI: 10.1007/s11280-020-00825-8
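
The abstract only names the attack; as a rough illustration (not the authors' implementation), the sketch below shows the intuition behind a CDA on a synthetic series: IID Laplace noise is spectrally white, while correlated data concentrates its power at low frequencies, so a filter built from the correlation model can strip away much of the noise. The AR(1) data model, the epsilon value, and the choice of a Wiener filter are all assumptions of this sketch; the paper's optimum filter may differ.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Correlated "original data": an AR(1) time series (assumed model for
# illustration; the paper treats general correlated time series).
n, phi = 10_000, 0.95
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Standard Laplace mechanism: IID noise with scale = sensitivity / epsilon.
epsilon, sensitivity = 1.0, 1.0
b = sensitivity / epsilon
y = x + rng.laplace(scale=b, size=n)   # the DP-published series

# Wiener filter built only from model knowledge an attacker could plausibly
# hold: the AR(1) power spectrum and the white Laplace noise variance 2*b^2.
freqs = np.fft.rfftfreq(n)
Sx = 1.0 / (1.0 + phi**2 - 2.0 * phi * np.cos(2.0 * np.pi * freqs))
Sn = 2.0 * b**2
H = Sx / (Sx + Sn)                     # per-frequency Wiener gain
x_hat = np.fft.irfft(H * np.fft.rfft(y), n)

# If the filtered estimate is much closer to x than the published y is,
# the effective noise, and hence the effective privacy, has been reduced.
print(f"MSE of published data:   {np.mean((y - x) ** 2):.3f}")
print(f"MSE after CDA filtering: {np.mean((x_hat - x) ** 2):.3f}")
```

With these assumed parameters, the published series carries noise of variance 2*b^2 = 2, and the filtered estimate should show a markedly lower MSE, which is the degradation of the effective privacy degree that the abstract reports.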