CURE: Privacy-Preserving Split Learning Done Right

Training deep neural networks often requires large-scale datasets, necessitating storage and processing on cloud servers due to computational constraints. These procedures must follow strict privacy regulations in domains like healthcare. Split Learning (SL), a framework that divides model layers between client(s) and server(s), is widely adopted for distributed model training. While Split Learning reduces privacy risks by limiting server access to the full parameter set, previous research has shown that the intermediate outputs exchanged between server and client can compromise the client's data privacy. Homomorphic encryption (HE)-based solutions exist for this scenario but often impose prohibitive computational burdens. To address these challenges, we propose CURE, a novel HE-based system that encrypts only the server side of the model and, optionally, the data. CURE enables secure SL while substantially improving communication and parallelization through advanced packing techniques. We propose two packing schemes that consume one HE level for one-layer networks and generalize our solutions to n-layer neural networks. We demonstrate that CURE can achieve accuracy similar to plaintext SL while being 16x faster than state-of-the-art privacy-preserving alternatives.
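The split learning flow described above can be made concrete with a short plaintext sketch. This is a minimal toy example under assumed names, shapes, and hyperparameters, not CURE's protocol: in CURE the server-side weights (and optionally the data) would additionally be encrypted under homomorphic encryption, which is omitted here, and some SL variants keep the labels client-side rather than at the server.

import numpy as np

# Toy split-learning loop: the client keeps the first layer and the raw
# data; the server keeps the rest. Only the cut-layer activations `a` and
# the gradient w.r.t. `a` ever cross the client/server boundary.
rng = np.random.default_rng(0)
W_client = rng.normal(scale=0.1, size=(8, 4))  # client layer: input dim 8 -> cut dim 4
W_server = rng.normal(scale=0.1, size=(4, 1))  # server layer (HE-encrypted in CURE)
x = rng.normal(size=(16, 8))                   # client-held batch
y = rng.normal(size=(16, 1))                   # toy targets (shown server-side for brevity)
lr = 0.1

for step in range(100):
    a = np.tanh(x @ W_client)                  # client forward; `a` is sent to the server
    pred = a @ W_server                        # server forward
    grad_pred = 2.0 * (pred - y) / len(x)      # d(MSE)/d(pred)
    grad_a = grad_pred @ W_server.T            # server returns only the gradient w.r.t. `a`
    W_server -= lr * (a.T @ grad_pred)         # server-side update
    grad_z = grad_a * (1.0 - a**2)             # client backprop through tanh locally
    W_client -= lr * (x.T @ grad_z)            # client-side update

print("final MSE:", float(np.mean((np.tanh(x @ W_client) @ W_server - y) ** 2)))

The runtime gains claimed in the abstract come from ciphertext packing. As a generic illustration of why packing helps, and explicitly not CURE's own packing schemes (which the paper develops), the script can continue with a plaintext simulation of the classic diagonal method for a matrix-vector product on packed ciphertexts, with np.roll standing in for an HE slot rotation:

# Diagonal (Halevi-Shoup-style) packing, simulated in plaintext: one packed
# vector holds all of v's entries, and the matrix-vector product needs only
# slot rotations and elementwise operations, exactly the SIMD primitives
# that packed HE schemes provide.
n = 4
M = rng.normal(size=(n, n))
v = rng.normal(size=n)
# Generalized diagonals: diag_i[j] = M[j, (j + i) % n].
diagonals = [np.array([M[j, (j + i) % n] for j in range(n)]) for i in range(n)]
result = np.zeros(n)
for i, d in enumerate(diagonals):
    result += d * np.roll(v, -i)     # np.roll mimics rotating ciphertext slots
assert np.allclose(result, M @ v)    # matches the ordinary product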


Bibliographic details
Main authors: Kanpak, Halil Ibrahim; Shabbir, Aqsa; Genç, Esra; Küpçü, Alptekin; Sav, Sinem
Format: Article
Language: English
Subjects: Computer Science - Cryptography and Security
Online access: https://arxiv.org/abs/2407.08977
DOI: 10.48550/arxiv.2407.08977
Published: 2024-07-12
Source: arXiv.org