Compressive learning with privacy guarantees

This work addresses the problem of learning from large collections of data with privacy guarantees. The compressive learning framework proposes to deal with the large scale of datasets by compressing them into a single vector of generalized random moments, called a sketch vector, from which the learning task is then performed. We provide sharp bounds on the so-called sensitivity of this sketching mechanism. This allows us to leverage standard techniques to ensure differential privacy—a well-established formalism for defining and quantifying the privacy of a random mechanism—by adding Laplace or Gaussian noise to the sketch. We combine these standard mechanisms with a new feature subsampling mechanism, which reduces the computational cost without damaging privacy. The overall framework is applied to the tasks of Gaussian modeling, k-means clustering and principal component analysis, for which sharp privacy bounds are derived. Empirically, the quality (for subsequent learning) of the compressed representation produced by our mechanism is strongly related to the induced noise level, for which we give analytical expressions.

Detailed Description

Bibliographic Details
Published in: Information and Inference: A Journal of the IMA 2022-03, Vol.11 (1), p.251-305
Authors: Chatalic, A; Schellekens, V; Houssiau, F; de Montjoye, Y A; Jacques, L; Gribonval, R
Format: Article
Language: English
Online access: Full text
Description: This work addresses the problem of learning from large collections of data with privacy guarantees. The compressive learning framework proposes to deal with the large scale of datasets by compressing them into a single vector of generalized random moments, called a sketch vector, from which the learning task is then performed. We provide sharp bounds on the so-called sensitivity of this sketching mechanism. This allows us to leverage standard techniques to ensure differential privacy—a well-established formalism for defining and quantifying the privacy of a random mechanism—by adding Laplace or Gaussian noise to the sketch. We combine these standard mechanisms with a new feature subsampling mechanism, which reduces the computational cost without damaging privacy. The overall framework is applied to the tasks of Gaussian modeling, k-means clustering and principal component analysis, for which sharp privacy bounds are derived. Empirically, the quality (for subsequent learning) of the compressed representation produced by our mechanism is strongly related to the induced noise level, for which we give analytical expressions.
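The sketch-and-noise mechanism summarized in the abstract can be illustrated with a minimal Python sketch. This is a generic illustration, not the paper's exact construction: it assumes random Fourier features as the moment functions and uses the standard Gaussian mechanism with a crude sensitivity bound, whereas the paper derives sharp ones.

```python
import numpy as np

def sketch(X, Omega):
    # Empirical average of random Fourier features:
    # z = (1/n) * sum_j exp(i * Omega^T x_j), a vector of generalized random moments.
    # X: (n, d) dataset; Omega: (d, m) random frequency matrix.
    return np.exp(1j * (X @ Omega)).mean(axis=0)

def private_sketch(X, Omega, epsilon, delta, rng=None):
    # Gaussian mechanism: add noise scaled to an L2-sensitivity bound.
    # Replacing one of the n records moves the averaged sketch by at most
    # 2*sqrt(m)/n in L2 norm (each feature has modulus 1) -- a crude bound,
    # not the sharp one derived in the paper.
    rng = np.random.default_rng() if rng is None else rng
    n, m = X.shape[0], Omega.shape[1]
    z = sketch(X, Omega)
    sensitivity = 2.0 * np.sqrt(m) / n
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = rng.normal(0.0, sigma, size=(2, m))  # independent real and imaginary parts
    return z + noise[0] + 1j * noise[1]
```

Downstream learning (e.g. k-means or Gaussian modeling) would then operate on the noisy sketch alone, never on the raw data.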
DOI: 10.1093/imaiai/iaab005
Publisher: Oxford University Press (OUP)
Publication date: 2022-03-26
Rights: Distributed under a Creative Commons Attribution 4.0 International License
ORCID iDs: 0000-0003-2574-2417; 0000-0002-6261-0328; 0000-0002-9450-8125
ISSN: 2049-8764, 2049-8772
EISSN: 2049-8772
Record ID: cdi_hal_primary_oai_HAL_hal_02496896v2
Source: Oxford University Press Journals All Titles (1996-Current)
Subjects: Computer Science; Cryptography and Security; Machine Learning; Signal and Image Processing; Statistics
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-06T21%3A01%3A28IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-hal_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Compressive%20learning%20with%20privacy%20guarantees&rft.jtitle=Information%20and%20Inference:%20A%20Journal%20of%20the%20IMA&rft.au=Chatalic,%20A&rft.date=2022-03-26&rft.volume=11&rft.issue=1&rft.spage=251&rft.epage=305&rft.pages=251-305&rft.issn=2049-8772&rft.eissn=2049-8772&rft_id=info:doi/10.1093/imaiai/iaab005&rft_dat=%3Chal_cross%3Eoai_HAL_hal_02496896v2%3C/hal_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true