Differential Privacy with Higher Utility by Exploiting Coordinate-wise Disparity: Laplace Mechanism Can Beat Gaussian in High Dimensions
Conventionally, in a differentially private additive noise mechanism, independent and identically distributed (i.i.d.) noise samples are added to each coordinate of the response. In this work, we formally present the addition of noise that is independent but not identically distributed (i.n.i.d.) across the coordinates to achieve a tighter privacy-accuracy trade-off by exploiting coordinate-wise disparity in privacy leakage. In particular, we study the i.n.i.d. Gaussian and Laplace mechanisms and obtain the conditions under which these mechanisms guarantee privacy. The optimal choice of parameters ensuring these conditions is derived considering (weighted) mean squared and $\ell_{p}^{p}$-errors as measures of accuracy. Theoretical analyses and numerical simulations demonstrate that the i.n.i.d. mechanisms achieve higher utility for given privacy requirements than their i.i.d. counterparts. One interesting observation is that, contrary to popular belief, the Laplace mechanism outperforms the Gaussian even in high dimensions when the disparity in coordinate-wise sensitivities is exploited. We also demonstrate how i.n.i.d. noise can improve performance in private (a) coordinate descent, (b) principal component analysis, and (c) deep learning with group clipping.
Saved in:
Main authors: | Muthukrishnan, Gokularam ; Kalyani, Sheetal |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Cryptography and Security |
Online access: | Order full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | |
container_volume | |
creator | Muthukrishnan, Gokularam ; Kalyani, Sheetal |
description | Conventionally, in a differentially private additive noise mechanism,
independent and identically distributed (i.i.d.) noise samples are added to
each coordinate of the response. In this work, we formally present the addition
of noise that is independent but not identically distributed (i.n.i.d.) across
the coordinates to achieve tighter privacy-accuracy trade-off by exploiting
coordinate-wise disparity in privacy leakage. In particular, we study the
i.n.i.d. Gaussian and Laplace mechanisms and obtain the conditions under which
these mechanisms guarantee privacy. The optimal choice of parameters ensuring
these conditions is derived considering (weighted) mean squared and
$\ell_{p}^{p}$-errors as measures of accuracy. Theoretical analyses and
numerical simulations demonstrate that the i.n.i.d. mechanisms achieve higher
utility for given privacy requirements than their i.i.d. counterparts. One
interesting observation is that, contrary to popular belief, the Laplace
mechanism outperforms the Gaussian even in high dimensions when the disparity
in coordinate-wise sensitivities is exploited. We also
demonstrate how the i.n.i.d. noise can improve the performance in private (a)
coordinate descent, (b) principal component analysis, and (c) deep learning
with group clipping. |
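The i.n.i.d. idea in the description can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation: it assumes the standard sufficient condition for $\varepsilon$-DP with per-coordinate Laplace scales $b_i$, namely $\sum_i \Delta_i / b_i \le \varepsilon$, and picks the scales $b_i = \Delta_i^{1/3} \sum_j \Delta_j^{2/3} / \varepsilon$ that minimize the mean squared error $2\sum_i b_i^2$ under that constraint (a routine Lagrange-multiplier computation, which may differ from the paper's exact parameterization). The function name and interface are hypothetical.

```python
import numpy as np

def inid_laplace(response, sensitivities, eps, rng=None):
    """Add independent, non-identically distributed Laplace noise.

    Illustrative sketch (not the paper's exact method): eps-DP holds
    whenever sum_i Delta_i / b_i <= eps; the scales below,
    b_i = Delta_i**(1/3) * sum_j Delta_j**(2/3) / eps,
    minimize the MSE 2 * sum_i b_i**2 subject to that constraint.
    """
    rng = np.random.default_rng() if rng is None else rng
    delta = np.asarray(sensitivities, dtype=float)
    b = delta ** (1 / 3) * np.sum(delta ** (2 / 3)) / eps  # per-coordinate scales
    return np.asarray(response, dtype=float) + rng.laplace(scale=b)

# Compare against the i.i.d. baseline b = ||Delta||_1 / eps on a query
# whose coordinate-wise sensitivities are highly disparate:
delta = np.array([1.0, 0.1, 0.01])
eps = 1.0
b_iid = np.full(delta.size, delta.sum() / eps)
b_inid = delta ** (1 / 3) * np.sum(delta ** (2 / 3)) / eps
print(2 * np.sum(b_iid ** 2), 2 * np.sum(b_inid ** 2))  # i.n.i.d. MSE is smaller
```

The more irregular the sensitivities $\Delta_i$, the larger the gap between the two MSE values printed above, which is the coordinate-wise disparity the abstract refers to; for equal sensitivities the two choices coincide.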
doi_str_mv | 10.48550/arxiv.2302.03511 |
format | Article |
fullrecord | (raw Primo XML record omitted; it repeats the title, authors, and abstract above) |
creationdate | 2023-02-07 |
rights | http://creativecommons.org/licenses/by-nc-sa/4.0 |
linktorsrc | https://arxiv.org/abs/2302.03511 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2302.03511 |
ispartof | |
issn | |
language | eng |
recordid | cdi_arxiv_primary_2302_03511 |
source | arXiv.org |
subjects | Computer Science - Cryptography and Security |
title | Differential Privacy with Higher Utility by Exploiting Coordinate-wise Disparity: Laplace Mechanism Can Beat Gaussian in High Dimensions |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-03T22%3A41%3A10IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Differential%20Privacy%20with%20Higher%20Utility%20by%20Exploiting%20Coordinate-wise%20Disparity:%20Laplace%20Mechanism%20Can%20Beat%20Gaussian%20in%20High%20Dimensions&rft.au=Muthukrishnan,%20Gokularam&rft.date=2023-02-07&rft_id=info:doi/10.48550/arxiv.2302.03511&rft_dat=%3Carxiv_GOX%3E2302_03511%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |