Balancing Privacy and Accuracy Using Significant Gradient Protection in Federated Learning
Previous state-of-the-art studies have demonstrated that adversaries can access sensitive user data through membership inference attacks (MIAs) in federated learning (FL). Introducing differential privacy (DP) into the FL framework is an effective way to enhance the privacy of FL. Nevertheless, in differentially private federated learning (DP-FL), local gradients become excessively sparse in certain training rounds. Especially when training with low privacy budgets, there is a risk of introducing excessive noise into clients' gradients, which can significantly degrade the accuracy of the global model. Balancing user privacy against global model accuracy therefore becomes a central challenge in DP-FL. To this end, we propose differential privacy federated aggregation based on significant gradient protection (DP-FedASGP). DP-FedASGP mitigates excessive noise by protecting significant gradients and accelerates the convergence of the global model by computing dynamic aggregation weights for gradients. Experimental results show that DP-FedASGP achieves privacy protection comparable to DP-FedAvg and cpSGD (communication-private SGD based on gradient quantization) and outperforms DP-FedSNLC (sparse noise based on clipping losses and privacy-budget costs) and FedSMP (sparsified model perturbation). Furthermore, the average global test accuracy of DP-FedASGP across four datasets and three models is about 2.62%, 4.71%, 0.45%, and 0.19% higher than the above methods, respectively. These improvements indicate that DP-FedASGP is a promising approach for balancing the privacy and accuracy of DP-FL.
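The abstract names two mechanisms: sparing "significant" gradients from DP noise, and aggregating clients with dynamic weights. The paper's actual selection rule, weighting formula, and privacy accounting are not given in this record, so the Python sketch below is only a hypothetical illustration of the mechanics: the top-fraction significance criterion, the inverse-norm aggregation weights, and the names `protect_and_perturb` and `aggregate` are all assumptions, not DP-FedASGP itself.

```python
# Illustrative sketch only -- the significance criterion (top fraction by
# magnitude) and the aggregation weights (inverse gradient norm) are
# stand-ins for DP-FedASGP's actual rules, which this record does not give.
import numpy as np

rng = np.random.default_rng(0)

def protect_and_perturb(grad, clip_norm=1.0, sigma=1.0, top_frac=0.1):
    """Clip one client's gradient, then add Gaussian noise to every
    coordinate EXCEPT a 'significant' top fraction by magnitude.
    Note: naively skipping noise on some coordinates leaks them; the
    paper's privacy analysis is what would make such protection admissible."""
    g = np.asarray(grad, dtype=float)
    g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))  # bound sensitivity
    k = max(1, int(top_frac * g.size))
    significant = np.argsort(np.abs(g))[-k:]        # indices spared from noise
    noise = rng.normal(0.0, sigma * clip_norm, size=g.size)
    noise[significant] = 0.0
    return g + noise

def aggregate(noisy_grads):
    """Combine client gradients with dynamic weights; here each client is
    weighted inversely to its (noisy) gradient norm, with weights
    normalized to sum to one."""
    norms = np.array([np.linalg.norm(g) for g in noisy_grads])
    w = 1.0 / (norms + 1e-12)
    w /= w.sum()
    return sum(wi * gi for wi, gi in zip(w, noisy_grads))

# Toy round: three clients, a 6-parameter model.
clients = [rng.normal(size=6) for _ in range(3)]
global_update = aggregate([protect_and_perturb(g) for g in clients])
print(global_update)
```

For scale, the standard Gaussian mechanism requires roughly σ ≥ √(2 ln(1.25/δ)) · Δ₂/ε for (ε, δ)-DP, so a low privacy budget ε forces large per-coordinate noise; this is the regime in which protecting a few significant coordinates can plausibly recover accuracy.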
Saved in:
| Published in: | IEEE Transactions on Computers, 2025-01, Vol. 74 (1), p. 278-292 |
|---|---|
| Main authors: | Zhang, Benteng; Mao, Yingchi; He, Xiaoming; Huang, Huawei; Wu, Jie |
| Format: | Article |
| Language: | eng |
| Subjects: | Accuracy; Data models; Differential privacy; Federated learning; Noise; Perturbation methods; Privacy; Protection; Servers; significant gradient protection; Training |
| Online access: | Order full text |
| container_end_page | 292 |
|---|---|
| container_issue | 1 |
| container_start_page | 278 |
| container_title | IEEE transactions on computers |
| container_volume | 74 |
| creator | Zhang, Benteng; Mao, Yingchi; He, Xiaoming; Huang, Huawei; Wu, Jie |
| doi_str_mv | 10.1109/TC.2024.3477971 |
| format | Article |
| fulltext | fulltext_linktorsrc |
| identifier | ISSN: 0018-9340 |
| ispartof | IEEE transactions on computers, 2025-01, Vol.74 (1), p.278-292 |
| issn | 0018-9340; 1557-9956 |
| language | eng |
| recordid | cdi_ieee_primary_10713222 |
| source | IEEE Xplore |
| subjects | Accuracy; Data models; Differential privacy; Federated learning; Noise; Perturbation methods; Privacy; Protection; Servers; significant gradient protection; Training |
| title | Balancing Privacy and Accuracy Using Significant Gradient Protection in Federated Learning |