Differential Privacy-Enabled Multi-Party Learning with Dynamic Privacy Budget Allocating Strategy


Detailed description

Saved in:
Bibliographic details
Published in: Electronics (Basel) 2023-02, Vol.12 (3), p.658
Main authors: Pan, Ke; Feng, Kaiyuan
Format: Article
Language: eng
Subjects:
Online access: Full text
container_end_page
container_issue 3
container_start_page 658
container_title Electronics (Basel)
container_volume 12
creator Pan, Ke
Feng, Kaiyuan
description As one of the promising paradigms of decentralized machine learning, multi-party learning has attracted increasing attention, owing to its capability of preventing the privacy of participants from being directly exposed to adversaries. Multi-party learning enables participants to train their model locally without uploading private data to a server. However, recent studies have shown that adversaries may launch a series of attacks on learning models and extract private information about participants by analyzing the shared parameters. Moreover, existing privacy-preserving multi-party learning approaches consume higher total privacy budgets, which poses a considerable challenge to the compromise between privacy guarantees and model utility. To address this issue, this paper explores an adaptive differentially private multi-party learning framework, which incorporates zero-concentrated differential privacy technique into multi-party learning to get rid of privacy threats, and offers sharper quantitative results. We further design a dynamic privacy budget allocating strategy to alleviate the high accumulation of total privacy budgets and provide better privacy guarantees, without compromising the model’s utility. We inject more noise into model parameters in the early stages of model training and gradually reduce the volume of noise as the direction of gradient descent becomes more accurate. Theoretical analysis and extensive experiments on benchmark datasets validated that our approach could effectively improve the model’s performance with less privacy loss.
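The core idea in the description — inject more Gaussian noise into the shared parameters early in training and reduce it as the gradient direction stabilizes — can be sketched as a clipped, noisy gradient step driven by a decaying noise multiplier. This is an illustrative sketch only: the linear decay schedule, function names, and default values (`sigma_start`, `sigma_end`, `clip`) are hypothetical stand-ins, not the authors' exact zCDP-based allocation rule.

```python
import math
import random

def noise_scale(step, total_steps, sigma_start=8.0, sigma_end=1.0):
    """Linearly decay the noise multiplier over training.

    Early steps get a large sigma (heavy noise, small budget spend);
    late steps get a small sigma, mirroring the paper's dynamic
    budget-allocating idea. The linear schedule is an assumption.
    """
    frac = step / max(1, total_steps - 1)
    return sigma_start + (sigma_end - sigma_start) * frac

def noisy_update(params, grads, lr, sigma, clip=1.0):
    """One differentially private gradient step: clip the gradient
    to L2 norm `clip`, add Gaussian noise scaled by sigma * clip,
    then apply a plain SGD update."""
    norm = math.sqrt(sum(g * g for g in grads))
    scale = min(1.0, clip / (norm + 1e-12))
    return [
        p - lr * (g * scale + random.gauss(0.0, sigma * clip))
        for p, g in zip(params, grads)
    ]
```

In a multi-party setting, each participant would apply `noisy_update` locally before sharing parameters, so the server and other parties only ever see noised values; with `sigma = 0` the step reduces to ordinary clipped SGD.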
doi_str_mv 10.3390/electronics12030658
format Article
publisher Basel: MDPI AG
rights COPYRIGHT 2023 MDPI AG; 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
orcidid 0000-0003-4970-4175
0000-0003-1215-558X
fulltext fulltext
identifier ISSN: 2079-9292
ispartof Electronics (Basel), 2023-02, Vol.12 (3), p.658
issn 2079-9292
2079-9292
language eng
recordid cdi_proquest_journals_2774874374
source MDPI - Multidisciplinary Digital Publishing Institute; EZB-FREE-00999 freely available EZB journals
subjects Algorithms
Budgets
Communication
Data security
Datasets
Deep learning
Guarantees
Machine learning
Mathematical models
Methods
Normal distribution
Parameters
Privacy
title Differential Privacy-Enabled Multi-Party Learning with Dynamic Privacy Budget Allocating Strategy
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-27T14%3A50%3A55IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Differential%20Privacy-Enabled%20Multi-Party%20Learning%20with%20Dynamic%20Privacy%20Budget%20Allocating%20Strategy&rft.jtitle=Electronics%20(Basel)&rft.au=Pan,%20Ke&rft.date=2023-02-01&rft.volume=12&rft.issue=3&rft.spage=658&rft.pages=658-&rft.issn=2079-9292&rft.eissn=2079-9292&rft_id=info:doi/10.3390/electronics12030658&rft_dat=%3Cgale_proqu%3EA743139929%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2774874374&rft_id=info:pmid/&rft_galeid=A743139929&rfr_iscdi=true