Batch Inference on Deep Convolutional Neural Networks With Fully Homomorphic Encryption Using Channel-By-Channel Convolutions
Secure Machine Learning as a Service (MLaaS) is a viable solution where clients seek secure ML computation delegation while protecting sensitive data. We propose an efficient method to securely evaluate deep standard convolutional neural networks based on the residue number system variant of the Cheon-Kim-Kim-Song (RNS-CKKS) scheme in the manner of batch inference. In particular, we introduce a packing method called Channel-By-Channel Packing that maximizes the slot compactness and Single-Instruction-Multiple-Data (SIMD) capabilities of ciphertexts. We also propose a new method for homomorphic convolution evaluation called Channel-By-Channel Convolution, which minimizes the additional heavy operations during convolution layers. Simulation results show improvements in amortized inference runtime by factors of 5.04 and 5.20 for ResNet-20 and ResNet-110, respectively, compared to previous results. The results almost exactly match the original backbone models, with classification accuracy differing from the backbone by less than 0.02%p. Furthermore, the rotation key size the client generates and transmits can be reduced from 105.6 GB to 6.91 GB for ResNet models in an MLaaS scenario. Finally, the method can be combined with previous methods, providing flexibility in selecting batch sizes for inference.
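The abstract's core idea is that ciphertext slots should be filled densely, channel after channel, so a whole batch of inputs is processed with SIMD operations. The following is not the paper's algorithm, only a plain-Python sketch of that slot-packing idea under assumed parameters (a hypothetical 2^15-slot RNS-CKKS ciphertext and CIFAR-sized 3x32x32 inputs):

```python
import numpy as np

NUM_SLOTS = 2 ** 15  # assumed slot count per ciphertext (e.g. ring degree 2^16)

def pack_channel_by_channel(batch, num_slots=NUM_SLOTS):
    """Flatten every channel of every image in the batch and lay them
    end-to-end across slot vectors, zero-padding only the final one."""
    flat = np.concatenate([img[c].ravel()
                           for img in batch
                           for c in range(img.shape[0])])
    n_ct = -(-flat.size // num_slots)          # ceil division: ciphertexts needed
    padded = np.zeros(n_ct * num_slots)
    padded[:flat.size] = flat
    return padded.reshape(n_ct, num_slots)     # one row per "ciphertext"

# 64 inputs with 3 channels of 32x32 pixels: 64*3*1024 = 196608 values,
# which fills exactly 6 slot vectors of 2^15 slots with no waste.
batch = [np.random.rand(3, 32, 32) for _ in range(64)]
cts = pack_channel_by_channel(batch)
utilization = (64 * 3 * 32 * 32) / (cts.size)
print(cts.shape, f"{utilization:.0%}")
```

This only models where values land in slots; real homomorphic evaluation would then apply rotations and plaintext multiplications per convolution, which is the cost the paper's Channel-By-Channel Convolution aims to minimize.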
Published in: | IEEE transactions on dependable and secure computing 2024-08, p.1-12 |
---|---|
Main authors: | Cheon, Jung Hee; Kang, Minsik; Kim, Taeseong; Jung, Junyoung; Yeo, Yongdong |
Format: | Article |
Language: | eng |
Subjects: | Accuracy; Convolutional neural networks; Cryptography; fully homomorphic encryption; privacy-preserving machine learning; ResNet; Servers; Single instruction multiple data; Throughput; Vectors |
Online access: | Order full text |
container_end_page | 12 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | IEEE transactions on dependable and secure computing |
container_volume | |
creator | Cheon, Jung Hee Kang, Minsik Kim, Taeseong Jung, Junyoung Yeo, Yongdong |
description | Secure Machine Learning as a Service (MLaaS) is a viable solution where clients seek secure ML computation delegation while protecting sensitive data. We propose an efficient method to securely evaluate deep standard convolutional neural networks based on the residue number system variant of the Cheon-Kim-Kim-Song (RNS-CKKS) scheme in the manner of batch inference. In particular, we introduce a packing method called Channel-By-Channel Packing that maximizes the slot compactness and Single-Instruction-Multiple-Data (SIMD) capabilities of ciphertexts. We also propose a new method for homomorphic convolution evaluation called Channel-By-Channel Convolution, which minimizes the additional heavy operations during convolution layers. Simulation results show improvements in amortized inference runtime by factors of 5.04 and 5.20 for ResNet-20 and ResNet-110, respectively, compared to previous results. The results almost exactly match the original backbone models, with classification accuracy differing from the backbone by less than 0.02%p. Furthermore, the rotation key size the client generates and transmits can be reduced from 105.6 GB to 6.91 GB for ResNet models in an MLaaS scenario. Finally, the method can be combined with previous methods, providing flexibility in selecting batch sizes for inference. |
doi_str_mv | 10.1109/TDSC.2024.3448406 |
format | Article |
fullrecord | ieee_id: 10654756; fulltext: https://ieeexplore.ieee.org/document/10654756; publisher: IEEE; date: 2024-08-28; coden: ITDSCM; orcid: 0009-0006-3826-4636, 0009-0007-9947-0377, 0000-0002-7085-2220 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1545-5971 |
ispartof | IEEE transactions on dependable and secure computing, 2024-08, p.1-12 |
issn | 1545-5971 1941-0018 |
language | eng |
recordid | cdi_crossref_primary_10_1109_TDSC_2024_3448406 |
source | IEEE Electronic Library (IEL) |
subjects | Accuracy; Convolutional neural network; Convolutional neural networks; Cryptography; fully homomorphic encryption; privacy-preserving machine learning; ResNet; Servers; Single instruction multiple data; Throughput; Vectors |
title | Batch Inference on Deep Convolutional Neural Networks With Fully Homomorphic Encryption Using Channel-By-Channel Convolutions |