Privacy-Aware Randomized Quantization via Linear Programming

Differential privacy mechanisms such as the Gaussian or Laplace mechanism have been widely used in data analytics for preserving individual privacy. However, they are mostly designed for continuous outputs and are unsuitable for scenarios where discrete values are necessary. Although various quantization mechanisms were proposed recently to generate discrete outputs under differential privacy, the outcomes are either biased or have an inferior accuracy-privacy trade-off. In this paper, we propose a family of quantization mechanisms that is unbiased and differentially private. It has a high degree of freedom and we show that some existing mechanisms can be considered as special cases of ours. To find the optimal mechanism, we formulate a linear optimization that can be solved efficiently using linear programming tools. Experiments show that our proposed mechanism can attain a better privacy-accuracy trade-off compared to baselines.
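The kind of linear program the abstract refers to can be illustrated with a small toy sketch (our own illustrative reconstruction, not the paper's actual formulation): for each input x, choose a distribution p(q | x) over coarse output levels q such that the mechanism is unbiased (E[q | x] = x) and the probabilities for neighboring inputs satisfy an ε-style likelihood-ratio bound, while minimizing a common bound t on every input's output variance. All of these constraints are linear in the probabilities, so off-the-shelf LP solvers apply. The output grid, the input pair, and ε = 2 below are arbitrary assumptions chosen so the toy instance is feasible.

```python
# Illustrative LP (assumed toy instance, not the paper's exact formulation):
# variables are the conditional probabilities p(q | x) of emitting output
# level q given input x, plus a scalar t bounding every input's variance.
import numpy as np
from scipy.optimize import linprog

levels = np.array([0.0, 0.5, 1.0])   # coarse output alphabet (assumed)
inputs = np.array([0.25, 0.75])      # two neighboring inputs (assumed)
eps = 2.0                            # privacy parameter (illustrative)

nq, nx = len(levels), len(inputs)
n = nq * nx + 1                      # p(q|x) flattened row-by-row, plus t

def idx(q, x):
    """Column index of the variable p(levels[q] | inputs[x])."""
    return x * nq + q

c = np.zeros(n)
c[-1] = 1.0                          # objective: minimize the variance bound t

A_eq, b_eq = [], []
for x in range(nx):
    row = np.zeros(n)
    row[x * nq:(x + 1) * nq] = 1.0
    A_eq.append(row); b_eq.append(1.0)        # probabilities sum to 1
    row = np.zeros(n)
    row[x * nq:(x + 1) * nq] = levels
    A_eq.append(row); b_eq.append(inputs[x])  # unbiasedness: E[q | x] = x

A_ub, b_ub = [], []
for x in range(nx):
    # Given unbiasedness, E[(q - x)^2 | x] is the variance; require it <= t.
    row = np.zeros(n)
    row[x * nq:(x + 1) * nq] = (levels - inputs[x]) ** 2
    row[-1] = -1.0
    A_ub.append(row); b_ub.append(0.0)
for q in range(nq):                           # eps-style ratio constraints:
    for x1 in range(nx):                      # p(q | x1) <= e^eps * p(q | x2)
        for x2 in range(nx):
            if x1 == x2:
                continue
            row = np.zeros(n)
            row[idx(q, x1)] = 1.0
            row[idx(q, x2)] = -np.exp(eps)
            A_ub.append(row); b_ub.append(0.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print("feasible:", res.success)
```

The auxiliary variable t is the standard epigraph trick for turning "minimize the worst-case variance" into a linear objective; shrinking ε below roughly 1 makes this particular toy instance infeasible, which mirrors the accuracy-privacy tension the abstract describes.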

Bibliographic Details
Main authors: Cai, Zhongteng; Zhang, Xueru; Khalili, Mohammad Mahdi
Format: Article
Language: English
Online access: order full text
Description: Differential privacy mechanisms such as the Gaussian or Laplace mechanism have been widely used in data analytics for preserving individual privacy. However, they are mostly designed for continuous outputs and are unsuitable for scenarios where discrete values are necessary. Although various quantization mechanisms were proposed recently to generate discrete outputs under differential privacy, the outcomes are either biased or have an inferior accuracy-privacy trade-off. In this paper, we propose a family of quantization mechanisms that is unbiased and differentially private. It has a high degree of freedom and we show that some existing mechanisms can be considered as special cases of ours. To find the optimal mechanism, we formulate a linear optimization that can be solved efficiently using linear programming tools. Experiments show that our proposed mechanism can attain a better privacy-accuracy trade-off compared to baselines.
DOI: 10.48550/arxiv.2406.02599
Publication date: 2024-06-01
Rights: http://creativecommons.org/licenses/by/4.0
Source: arXiv.org
Subjects: Computer Science - Artificial Intelligence; Computer Science - Cryptography and Security