SE-PIM: In-Memory Acceleration of Data-Intensive Confidential Computing
Demand for data-intensive workloads and confidential computing are the prominent research directions shaping the future of cloud computing. Computer architectures are evolving to accommodate the computing of large data. Meanwhile, a plethora of works has explored protecting the confidentiality of the in-cloud computation in the context of hardware-based secure enclaves. However, the approach has faced challenges in achieving efficient large data computation. In this article, we present a novel design, called se-pim, that retrofits Processing-In-Memory (PIM) as a data-intensive confidential computing accelerator. PIM-accelerated computation renders large data computation highly efficient by minimizing data movement. Based on our observation that moving computation closer to memory can achieve efficiency of computation and confidentiality of the processed information simultaneously, we study the advantages of confidential computing inside memory. We construct our findings into a software-hardware co-design called se-pim. Our design illustrates the advantages of PIM-based confidential computing acceleration. We study the challenges in adapting PIM in confidential computing and propose a set of imperative changes, as well as a programming model that can utilize them. Our evaluation shows se-pim can provide a side-channel resistant secure computation offloading and run data-intensive applications with negligible performance overhead compared to the baseline PIM model.
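The abstract's core idea is an offload pattern: data is staged once inside PIM-local memory, the kernel runs next to the data, and only a small, still-protected result crosses back to the host. The C sketch below is purely illustrative of that pattern under assumed behavior; it is not the paper's programming model or API. The device struct, the function names (pim_attest, pim_copy_in, pim_kernel_count_equal), and the toy XOR "encryption" are all invented for this example.

```c
/*
 * Illustrative sketch only: a simulated host-side offload flow for a
 * secure PIM accelerator. All names and the API shape are invented for
 * this example; they are not the programming model defined in the paper.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simulated PIM device: a private arena standing in for PIM-local DRAM
 * that the host cannot observe directly. */
typedef struct {
    uint8_t *local_mem;
    size_t   size;
    uint8_t  session_key[16];
} pim_device;

/* "Attestation": a real design would verify the device and derive a
 * shared key; here we just fill in a fixed placeholder key. */
static int pim_attest(pim_device *dev) {
    memset(dev->session_key, 0xA5, sizeof dev->session_key);
    return 0;
}

/* Copy (already encrypted) data into PIM-local memory, once. */
static void pim_copy_in(pim_device *dev, const uint8_t *src, size_t n) {
    if (n > dev->size) n = dev->size;
    memcpy(dev->local_mem, src, n);
}

/* The offloaded kernel runs "inside" the device, next to the data:
 * a toy scan that XOR-decrypts with the session key and counts matching
 * bytes, so plaintext is never exposed to the host. */
static size_t pim_kernel_count_equal(pim_device *dev, size_t n, uint8_t needle) {
    size_t hits = 0;
    for (size_t i = 0; i < n; i++) {
        uint8_t plain = dev->local_mem[i] ^ dev->session_key[i % sizeof dev->session_key];
        if (plain == needle) hits++;
    }
    return hits;
}

int main(void) {
    size_t n = 1u << 16;
    pim_device dev = { .local_mem = malloc(n), .size = n };
    if (!dev.local_mem || pim_attest(&dev) != 0) return 1;

    /* Host prepares data encrypted under the session key (toy XOR cipher). */
    uint8_t *ciphertext = malloc(n);
    if (!ciphertext) return 1;
    for (size_t i = 0; i < n; i++)
        ciphertext[i] = (uint8_t)(i % 7) ^ dev.session_key[i % sizeof dev.session_key];

    /* Offload: move data once, compute in place, return only a small result. */
    pim_copy_in(&dev, ciphertext, n);
    size_t hits = pim_kernel_count_equal(&dev, dev.size, 3);
    printf("matching records: %zu\n", hits);

    free(ciphertext);
    free(dev.local_mem);
    return 0;
}
```

The point of the sketch is the data-movement asymmetry the abstract emphasizes: one bulk copy into memory-side storage, computation in place, and only a scalar result returned to the host.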
Saved in:
Published in: | IEEE transactions on cloud computing 2023-07, Vol. 11 (3), p. 2473-2490 |
---|---|
Main Authors: | Duy, Kha Dinh; Lee, Hojoon |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Full text |
container_end_page | 2490 |
---|---|
container_issue | 3 |
container_start_page | 2473 |
container_title | IEEE transactions on cloud computing |
container_volume | 11 |
creator | Duy, Kha Dinh; Lee, Hojoon |
description | Demand for data-intensive workloads and confidential computing are the prominent research directions shaping the future of cloud computing. Computer architectures are evolving to accommodate the computing of large data. Meanwhile, a plethora of works has explored protecting the confidentiality of the in-cloud computation in the context of hardware-based secure enclaves. However, the approach has faced challenges in achieving efficient large data computation. In this article, we present a novel design, called se-pim , that retrofits Processing-In-Memory (PIM) as a data-intensive confidential computing accelerator. PIM-accelerated computation renders large data computation highly efficient by minimizing data movement. Based on our observation that moving computation closer to memory can achieve efficiency of computation and confidentiality of the processed information simultaneously, we study the advantages of confidential computing inside memory. We construct our findings into a software-hardware co-design called se-pim . Our design illustrates the advantages of PIM-based confidential computing acceleration. We study the challenges in adapting PIM in confidential computing and propose a set of imperative changes, as well as a programming model that can utilize them. Our evaluation shows se-pim can provide a side-channel resistant secure computation offloading and run data-intensive applications with negligible performance overhead compared to the baseline PIM model. |
doi_str_mv | 10.1109/TCC.2022.3207145 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 2168-7161 |
ispartof | IEEE transactions on cloud computing, 2023-07, Vol.11 (3), p.2473-2490 |
issn | 2168-7161 2372-0018 |
language | eng |
recordid | cdi_ieee_primary_9906059 |
source | IEEE Electronic Library (IEL) |
subjects | Cloud computing; Co-design; Computation offloading; Computational efficiency; Computational modeling; Computer architecture; Computer memory; confidential computing; Confidentiality; Hardware; Memory management; Movement; Processor-in-memory; Random access memory; Retrofitting |
title | SE-PIM: In-Memory Acceleration of Data-Intensive Confidential Computing |