Local differentially private federated learning with homomorphic encryption
Federated learning (FL) is an emerging distributed machine learning paradigm that preserves privacy by not revealing private local data. However, limitations remain. On one hand, users' privacy can still be deduced from local outputs. On the other hand, privacy, efficiency, and accuracy are hard...
Published in: | The Journal of supercomputing 2023-11, Vol.79 (17), p.19365-19395 |
Main authors: | Zhao, Jianzhe; Huang, Chenxi; Wang, Wenji; Xie, Rulin; Dong, Rongrong; Matwin, Stan |
Format: | Article |
Language: | eng |
Online access: | Full text |
container_end_page | 19395 |
container_issue | 17 |
container_start_page | 19365 |
container_title | The Journal of supercomputing |
container_volume | 79 |
creator | Zhao, Jianzhe Huang, Chenxi Wang, Wenji Xie, Rulin Dong, Rongrong Matwin, Stan |
description | Federated learning (FL) is an emerging distributed machine learning paradigm that preserves privacy by not revealing private local data. However, limitations remain. On one hand, users' privacy can still be deduced from local outputs. On the other hand, privacy, efficiency, and accuracy are conflicting goals that are hard to fulfill simultaneously. To tackle these problems, we propose a novel privacy-preserving FL algorithm (HEFL-LDP), which integrates semi-homomorphic encryption and local differential privacy. While reducing the computational and communication burden, HEFL-LDP resists model inversion attacks and membership inference attacks from the server or malicious clients. Moreover, a new utility optimization strategy with accuracy-oriented privacy parameter adjustment and model shuffling is proposed to counter the decline in accuracy. The security and cost of the algorithm are verified through theoretical analysis and proof. Comprehensive experimental evaluations on the MNIST and CIFAR-10 datasets demonstrate that HEFL-LDP significantly reduces the privacy budget and outperforms existing algorithms in computational cost and accuracy. |
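The description combines two standard building blocks: local differential privacy (each client perturbs its own update before release) and additively homomorphic encryption (the server can sum encrypted updates without decrypting any of them). The sketch below is an illustration of those two primitives only, not the paper's HEFL-LDP algorithm: `perturb` applies the classical Laplace mechanism for ε-LDP, and the toy Paillier functions show why ciphertext multiplication lets a server aggregate updates it cannot read. The primes (2003, 2011), the fixed-point offset (100) and scale (1000), and ε = 1 are all illustrative assumptions; real deployments need a modulus of 2048 bits or more. Requires Python 3.9+.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb(value, sensitivity, epsilon):
    """Release `value` under epsilon-LDP using the Laplace mechanism."""
    return value + laplace_noise(sensitivity / epsilon)

def keygen(p=2003, q=2011):
    """Toy Paillier keys (g = n + 1 variant); far too small for real use."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)           # modular inverse of lambda mod n
    return (n,), (n, lam, mu)      # public key, secret key

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # (1 + n)^m = 1 + m*n (mod n^2), so g^m is cheap when g = n + 1
    return (1 + m * n) % n2 * pow(r, n, n2) % n2

def decrypt(sk, c):
    n, lam, mu = sk
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n  # the standard L(x) = (x - 1) / n
    return L * mu % n

# Client side: perturb each local update (LDP), fixed-point encode with an
# offset so plaintexts stay non-negative, then encrypt.
pk, sk = keygen()
updates = [0.5, 0.7, 0.9]          # hypothetical local model updates
noisy = [perturb(u, sensitivity=1.0, epsilon=1.0) for u in updates]
cts = [encrypt(pk, round((x + 100.0) * 1000)) for x in noisy]

# Server side: multiplying ciphertexts adds plaintexts, without decrypting.
n2 = pk[0] ** 2
agg = 1
for c in cts:
    agg = agg * c % n2

# Key holder decrypts the aggregate and undoes the fixed-point encoding.
total = decrypt(sk, agg) / 1000 - 100.0 * len(updates)
assert abs(total - sum(noisy)) < 0.01  # matches the noisy sum
```

The additive homomorphism (Enc(a) · Enc(b) mod n² = Enc(a + b)) is exactly the property a federated server needs to sum client updates it cannot read, while the Laplace noise protects each individual update even after the aggregate is decrypted.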
doi_str_mv | 10.1007/s11227-023-05378-x |
format | Article |
fullrecord | Article record via ProQuest/CrossRef. Publisher: Springer US, New York. ISSN: 0920-8542; EISSN: 1573-0484. Peer reviewed. Rights: The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023. |
fulltext | fulltext |
identifier | ISSN: 0920-8542 |
ispartof | The Journal of supercomputing, 2023-11, Vol.79 (17), p.19365-19395 |
issn | 0920-8542; 1573-0484 |
language | eng |
recordid | cdi_proquest_journals_2871753210 |
source | Springer Nature - Complete Springer Journals |
subjects | Accuracy; Algorithms; Compilers; Computer Science; Computing costs; Cost analysis; Datasets; Federated learning; Interpreters; Machine learning; Optimization; Privacy; Processor Architectures; Programming Languages |
title | Local differentially private federated learning with homomorphic encryption |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-19T22%3A43%3A28IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Local%20differentially%20private%20federated%20learning%20with%20homomorphic%20encryption&rft.jtitle=The%20Journal%20of%20supercomputing&rft.au=Zhao,%20Jianzhe&rft.date=2023-11-01&rft.volume=79&rft.issue=17&rft.spage=19365&rft.epage=19395&rft.pages=19365-19395&rft.issn=0920-8542&rft.eissn=1573-0484&rft_id=info:doi/10.1007/s11227-023-05378-x&rft_dat=%3Cproquest_cross%3E2871753210%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2871753210&rft_id=info:pmid/&rfr_iscdi=true |