Privacy-Preserving Hierarchical Model-Distributed Inference

This paper focuses on designing a privacy-preserving Machine Learning (ML) inference protocol for a hierarchical setup, where clients own/generate data, model owners (cloud servers) have a pre-trained ML model, and edge servers perform ML inference on clients' data using the cloud server's ML model. Our goal is to speed up ML inference while providing privacy to both the data and the ML model. Our approach (i) uses model-distributed inference (model parallelization) at the edge servers and (ii) reduces the amount of communication to/from the cloud server. Our privacy-preserving hierarchical model-distributed inference design, privateMDI, uses additive secret sharing and linearly homomorphic encryption to handle the linear calculations in ML inference, and garbled circuits and a novel three-party oblivious transfer to handle the non-linear functions. privateMDI consists of an offline phase and an online phase, designed so that most of the data exchange takes place offline and the communication overhead of the online phase is reduced. In particular, there is no communication to/from the cloud server in the online phase, and the amount of communication between the client and the edge servers is minimized. Experimental results demonstrate that privateMDI significantly reduces ML inference time compared to the baselines.
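The linear steps of such a protocol rest on one algebraic fact: a value split into random additive shares can be processed by each server independently, because linear functions commute with the addition of shares. The Python sketch below illustrates only this general idea for a single dot product; the two-edge-server split, the field modulus P, and the public weight vector w are assumptions made for illustration, not the paper's actual protocol (in privateMDI the model itself is also kept private, via linearly homomorphic encryption).

# A minimal sketch of additive secret sharing over a prime field, assuming a
# hypothetical two-edge-server setup with a public weight vector. This shows
# the general technique named in the abstract, not the privateMDI protocol.
import secrets

P = 2**61 - 1  # Mersenne prime; the field choice is an assumption for illustration

def share(x):
    """Split x into two additive shares that sum to x mod P."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    return (s0 + s1) % P

# Offline-style step: the client secret-shares its input vector once, so the
# linear online step below needs no further communication for the input.
x = [3, 1, 4]                      # client's private input
shares = [share(v) for v in x]
x0 = [s[0] for s in shares]        # share vector held by edge server 0
x1 = [s[1] for s in shares]        # share vector held by edge server 1

# Online step: each edge server applies the linear layer w to its share
# locally; by linearity the shared results still sum to the true output,
# and neither server ever sees the plaintext input.
w = [2, 5, 7]
y0 = sum(wi * xi for wi, xi in zip(w, x0)) % P
y1 = sum(wi * xi for wi, xi in zip(w, x1)) % P

assert reconstruct(y0, y1) == sum(wi * xi for wi, xi in zip(w, x)) % P

Because the shares are distributed ahead of time, the linear evaluation here involves no interaction at all, which mirrors the paper's stated goal of pushing data exchange into the offline phase.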

Bibliographic Details
Main Authors: Dehkordi, Fatemeh Jafarian; Keshtkarjahromi, Yasaman; Seferoglu, Hulya
Format: Article
Language: English
Subjects: Computer Science - Cryptography and Security; Computer Science - Learning
Source: arXiv.org
DOI: 10.48550/arxiv.2407.18353
Online Access: https://arxiv.org/abs/2407.18353