Predictable Shared Cache Management for Multi-Core Real-Time Virtualization

Bibliographic Details
Published in: ACM Transactions on Embedded Computing Systems, 2018-01, Vol. 17 (1), p. 1-27
Main authors: Kim, Hyoseung; Rajkumar, Ragunathan (Raj)
Format: Article
Language: English
Online access: Full text
Description: Real-time virtualization has gained much attention for the consolidation of multiple real-time systems onto a single hardware platform while ensuring timing predictability. However, a shared last-level cache (LLC) on modern multi-core platforms can easily hamper the timing predictability of real-time virtualization due to the resulting temporal interference among consolidated workloads. Since such interference caused by the LLC is highly variable and may not even have existed in the legacy systems to be consolidated, it poses a significant challenge for real-time virtualization. In this article, we propose a predictable shared cache management framework for multi-core real-time virtualization. Our framework introduces two hypervisor-level techniques, vLLC and vColoring, that enable the cache allocation of individual tasks running in a virtual machine (VM), which is not achievable by the current state of the art. Our framework also provides a cache management scheme that determines cache allocation to tasks, designs VMs in a cache-aware manner, and minimizes the aggregated utilization of VMs to be consolidated. As a proof of concept, we implemented vLLC and vColoring in the KVM hypervisor running on x86 and ARM multi-core platforms. Experimental results with three different guest OSs (i.e., Linux/RK, vanilla Linux, and MS Windows Embedded) show that our techniques can effectively control the cache allocation of tasks in VMs. Our cache management scheme yields a significant utilization benefit compared to other approaches while satisfying timing constraints.
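The abstract does not spell out the allocation mechanism, but techniques such as vColoring are commonly built on page coloring: the set-index bits of a physical address that lie above the page offset determine which LLC sets a page can occupy, so assigning pages of disjoint colors to different tasks or VMs partitions the shared cache. The C sketch below illustrates only that underlying arithmetic under assumed cache parameters (set count, line size, page size are hypothetical values for a typical x86 LLC); it is not the authors' implementation of vLLC or vColoring.

#include <stdio.h>
#include <stdint.h>

/* Illustrative page-coloring arithmetic; cache parameters are assumed,
 * not taken from the article. */
#define PAGE_SIZE   4096u   /* 4 KiB pages */
#define CACHE_LINE  64u     /* LLC line size in bytes (assumed) */
#define LLC_SETS    8192u   /* number of LLC sets (assumed) */

/* Number of distinct page colors = (sets * line size) / page size. */
static unsigned num_colors(void)
{
    return (LLC_SETS * CACHE_LINE) / PAGE_SIZE;   /* 128 with these values */
}

/* A page's color comes from the set-index bits above the page offset;
 * pages of different colors can never map to the same LLC sets. */
static unsigned page_color(uint64_t phys_addr)
{
    return (unsigned)((phys_addr / PAGE_SIZE) % num_colors());
}

int main(void)
{
    uint64_t addr = 0x12345000ull;   /* example physical address */
    printf("colors available: %u\n", num_colors());
    printf("page color of %#llx: %u\n",
           (unsigned long long)addr, page_color(addr));
    return 0;
}

With these assumed parameters the platform offers 128 colors; a hypervisor-level scheme in the spirit of vColoring would hand each task or VM a subset of colors when mapping guest-physical to host-physical pages, which is what makes per-task cache allocation inside a VM possible.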
DOI: 10.1145/3092946
ISSN: 1539-9087
EISSN: 1558-3465
Source: ACM Digital Library Complete