Making Split Learning Resilient to Label Leakage by Potential Energy Loss
As a practical privacy-preserving learning method, split learning has drawn much attention in academia and industry. However, its security is constantly being questioned since the intermediate results are shared during training and inference. In this paper, we focus on the privacy leakage problem caused by the trained split model, i.e., the attacker can use a few labeled samples to fine-tune the bottom model and obtain quite good performance. To prevent this kind of privacy leakage, we propose the potential energy loss, which makes the output of the bottom model follow a more 'complicated' distribution by pushing outputs of the same class towards the decision boundary. Therefore, the adversary suffers a large generalization error when fine-tuning the bottom model with only a few leaked labeled samples. Experimental results show that our method significantly lowers the attacker's fine-tuning accuracy, making the split model more resilient to label leakage.
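The abstract only sketches the defense at a high level. Below is a minimal, illustrative sketch of what a repulsive potential-energy term over the bottom model's outputs could look like, assuming an inverse-distance (Coulomb-style) energy over same-class pairs; the function name, the exact form of the energy, and the weight `lam` are assumptions for illustration, not the paper's precise formulation.

```python
# Illustrative sketch only (assumed inverse-distance form, not the paper's
# exact loss): treat same-class outputs of the bottom model as mutually
# repelling charges. Minimizing the energy pushes same-class embeddings
# apart, towards the decision boundary, so an attacker fine-tuning the
# bottom model with a few leaked labels generalizes poorly.
import torch

def potential_energy_loss(embeddings: torch.Tensor,
                          labels: torch.Tensor,
                          eps: float = 1e-6) -> torch.Tensor:
    """Mean inverse pairwise distance within each class (Coulomb-style)."""
    loss = embeddings.new_zeros(())
    for c in labels.unique():
        z = embeddings[labels == c]                   # outputs of one class
        n = z.shape[0]
        if n < 2:
            continue
        dist = torch.cdist(z, z, p=2).clamp_min(eps)  # pairwise distances
        off_diag = ~torch.eye(n, dtype=torch.bool, device=z.device)
        loss = loss + (1.0 / dist[off_diag]).mean()   # repulsive energy
    return loss

# Hypothetical training step: combine with the ordinary task loss, e.g.
# z = bottom_model(x); logits = top_model(z)
# total = torch.nn.functional.cross_entropy(logits, y) + lam * potential_energy_loss(z, y)
```

In the usage comment, `bottom_model`, `top_model`, and `lam` are placeholders for the two halves of the split model and a tuning weight, not names taken from the paper.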
Saved in:
Published in: | arXiv.org, 2022-10 |
---|---|
Main authors: | Zheng, Fei; Chen, Chaochao; Yao, Binhui; Zheng, Xiaolin |
Format: | Article |
Language: | eng |
Subjects: | Leakage; Learning; Potential energy; Privacy |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Zheng, Fei; Chen, Chaochao; Yao, Binhui; Zheng, Xiaolin |
description | As a practical privacy-preserving learning method, split learning has drawn much attention in academia and industry. However, its security is constantly being questioned since the intermediate results are shared during training and inference. In this paper, we focus on the privacy leakage problem caused by the trained split model, i.e., the attacker can use a few labeled samples to fine-tune the bottom model and obtain quite good performance. To prevent this kind of privacy leakage, we propose the potential energy loss, which makes the output of the bottom model follow a more 'complicated' distribution by pushing outputs of the same class towards the decision boundary. Therefore, the adversary suffers a large generalization error when fine-tuning the bottom model with only a few leaked labeled samples. Experimental results show that our method significantly lowers the attacker's fine-tuning accuracy, making the split model more resilient to label leakage. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2022-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2726163375 |
source | Free E-Journals |
subjects | Leakage; Learning; Potential energy; Privacy |
title | Making Split Learning Resilient to Label Leakage by Potential Energy Loss |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T13%3A03%3A14IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Making%20Split%20Learning%20Resilient%20to%20Label%20Leakage%20by%20Potential%20Energy%20Loss&rft.jtitle=arXiv.org&rft.au=Zheng,%20Fei&rft.date=2022-10-18&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2726163375%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2726163375&rft_id=info:pmid/&rfr_iscdi=true |