The Privacy-Utility Tradeoff in Rank-Preserving Dataset Obfuscation
Dataset obfuscation refers to techniques in which random noise is added to the entries of a given dataset, prior to its public release, to protect against leakage of private information. In this work, dataset obfuscation under two objectives is considered: i) rank-preservation: to preserve the row ordering in the obfuscated dataset induced by a given rank function, and ii) anonymity: to protect user anonymity under fingerprinting attacks. The first objective, rank-preservation, is of interest in applications such as the design of search engines and recommendation systems, feature matching, and social network analysis. Fingerprinting attacks, considered in evaluating the anonymity objective, are privacy attacks where an attacker constructs a fingerprint of a victim based on its observed activities, such as online web activities, and compares this fingerprint with information extracted from a publicly released obfuscated dataset to identify the victim. By evaluating the performance limits of a class of obfuscation mechanisms over asymptotically large datasets, a fundamental trade-off is quantified between rank-preservation and user anonymity. Single-letter obfuscation mechanisms are considered, where each entry in the dataset is perturbed by independent noise, and their fundamental performance limits are characterized by leveraging large deviation techniques. The optimal obfuscating test-channel, optimizing the privacy-utility tradeoff, is characterized in the form of a convex optimization problem which can be solved efficiently. Numerical simulations of various scenarios are provided to verify the theoretical derivations.
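The mechanism and the attack described in the abstract can be made concrete with a small numerical sketch. The following Python snippet is an illustration only, not the paper's construction: it applies a single-letter obfuscation (independent noise added to every entry), estimates how often the row ordering under a simple rank function survives obfuscation, and runs a basic fingerprinting attack that links a noisy observation of a victim's row to the closest row of the released dataset. The row-sum rank function, the Gaussian noise model, and the nearest-row matching rule are assumptions chosen for illustration.

```python
# Illustrative sketch of the privacy-utility tradeoff under single-letter
# obfuscation. This is NOT the paper's mechanism or attack; the rank function,
# noise model, and matching rule below are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)


def obfuscate(data, noise_std):
    """Single-letter mechanism: perturb each entry with independent noise."""
    return data + rng.normal(0.0, noise_std, size=data.shape)


def rank_preservation(data, released, rank_fn):
    """Fraction of row pairs whose ordering under rank_fn is preserved."""
    r_orig = rank_fn(data)
    r_rel = rank_fn(released)
    n = len(r_orig)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    kept = sum(
        (r_orig[i] - r_orig[j]) * (r_rel[i] - r_rel[j]) > 0 for i, j in pairs
    )
    return kept / len(pairs)


def identification_rate(data, released, fingerprint_std, trials=200):
    """Toy fingerprinting attack: the attacker observes a noisy copy of a
    victim's row and links it to the closest row of the released dataset."""
    n, d = data.shape
    hits = 0
    for _ in range(trials):
        victim = rng.integers(n)
        fingerprint = data[victim] + rng.normal(0.0, fingerprint_std, size=d)
        guess = np.argmin(np.linalg.norm(released - fingerprint, axis=1))
        hits += int(guess == victim)
    return hits / trials


if __name__ == "__main__":
    data = rng.normal(size=(50, 8))       # toy dataset: 50 users, 8 attributes
    rank_fn = lambda d: d.sum(axis=1)     # assumed rank function: row sum
    for noise_std in (0.1, 0.5, 1.0, 2.0):
        released = obfuscate(data, noise_std)
        print(
            f"noise_std={noise_std:>4}: "
            f"rank preserved={rank_preservation(data, released, rank_fn):.2f}, "
            f"identification rate={identification_rate(data, released, 0.3):.2f}"
        )
```

Increasing the obfuscation noise lowers the identification rate (better anonymity) while degrading the fraction of preserved pairwise orderings (worse rank-preservation), which is the qualitative tradeoff the paper quantifies.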
Saved in:
Main authors: | Shariatnasab, Mahshad; Shirani, Farhad; Iyengar, S. Sitharma |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Cryptography and Security; Computer Science - Databases; Computer Science - Information Theory; Mathematics - Information Theory |
Online access: | Order full text |
creator | Shariatnasab, Mahshad; Shirani, Farhad; Iyengar, S. Sitharma |
---|---|
description | Dataset obfuscation refers to techniques in which random noise is added to
the entries of a given dataset, prior to its public release, to protect against
leakage of private information. In this work, dataset obfuscation under two
objectives is considered: i) rank-preservation: to preserve the row ordering in
the obfuscated dataset induced by a given rank function, and ii) anonymity: to
protect user anonymity under fingerprinting attacks. The first objective,
rank-preservation, is of interest in applications such as the design of search
engines and recommendation systems, feature matching, and social network
analysis. Fingerprinting attacks, considered in evaluating the anonymity
objective, are privacy attacks where an attacker constructs a fingerprint of a
victim based on its observed activities, such as online web activities, and
compares this fingerprint with information extracted from a publicly released
obfuscated dataset to identify the victim. By evaluating the performance limits
of a class of obfuscation mechanisms over asymptotically large datasets, a
fundamental trade-off is quantified between rank-preservation and user
anonymity. Single-letter obfuscation mechanisms are considered, where each
entry in the dataset is perturbed by independent noise, and their fundamental
performance limits are characterized by leveraging large deviation techniques.
The optimal obfuscating test-channel, optimizing the privacy-utility tradeoff,
is characterized in the form of a convex optimization problem which can be
solved efficiently. Numerical simulations of various scenarios are provided to
verify the theoretical derivations. |
doi_str_mv | 10.48550/arxiv.2305.07079 |
format | Article |
creationdate | 2023-05-11 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
backlink | https://arxiv.org/abs/2305.07079 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2305.07079 |
language | eng |
recordid | cdi_arxiv_primary_2305_07079 |
source | arXiv.org |
subjects | Computer Science - Cryptography and Security; Computer Science - Databases; Computer Science - Information Theory; Mathematics - Information Theory |
title | The Privacy-Utility Tradeoff in Rank-Preserving Dataset Obfuscation |