Adaptive Page Migration Policy With Huge Pages in Tiered Memory Systems

Bibliographic Details

Published in: IEEE Transactions on Computers, 2022-01, Vol. 71 (1), pp. 53-68
Authors: Heo, Taekyung; Wang, Yang; Cui, Wei; Huh, Jaehyuk; Zhang, Lintao
Format: Article
Language: English
Online access: Full text
Description: To accommodate the growing demand for memory capacity in a cost-effective way, multiple types of memory are incorporated into a single system. In such tiered memory systems, consisting of small fast and large slow memory components, accurately identifying the performance importance of pages is critical to properly migrate hot pages to fast memory. Meanwhile, the growing address translation cost caused by increasing memory footprints has driven the adoption of huge pages in common systems. Although such page hotness identification problems have existed for a long time, this article revisits the problem in the new context of tiered memory systems and huge pages. The article first investigates the memory locality behavior of applications under three potential migration policies with huge pages: least-recently-used (LRU), least-frequently-used (LFU), and random. The evaluation shows that none of the three migration policies excels the others, as the effectiveness of each policy depends on application behavior. In addition, the results show that huge pages can be effective even with page migration, if a proper migration policy is used. Based on this observation, the paper proposes a novel dynamic policy selection mechanism, which identifies the best migration policy for a given workload and allows multiple concurrently running workloads to adopt different policies. To find the optimal policy for each workload, the study first identifies key features that must be inferred from the limited, approximate memory access information collected via accessed bits in page tables. In addition, it proposes a parallel emulation of alternative policies to assess the benefit of possible alternatives. The proposed dynamic policy selection achieves a 23.8 percent performance improvement compared to a prior approximate mechanism based on LRU lists in Linux systems.
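The policy selection idea in the abstract can be illustrated with a short sketch. The Python fragment below is a minimal, hypothetical emulation under simplified assumptions: it replays one epoch of sampled page accesses (standing in for accessed bits scanned from page tables), scores the candidate migration policies (LRU, LFU, random) in parallel against the same trace, and picks whichever policy would have served the most accesses from the fast tier. All names (PolicyEmulator, FAST_CAPACITY, pick_policy, and the toy trace) are illustrative and are not taken from the paper or from Linux.

```python
# Illustrative sketch (not the paper's implementation): parallel emulation of
# candidate page migration policies over one epoch of sampled page accesses.
import random
from collections import OrderedDict, Counter

FAST_CAPACITY = 4   # pages that fit in the fast tier (illustrative value)

class PolicyEmulator:
    """Emulates one candidate policy against a shared access trace.

    Each emulator keeps its own model of which pages would sit in the fast
    tier under its policy and counts how many accesses that placement would
    have served from fast memory ("hits").
    """
    def __init__(self, name):
        self.name = name
        self.fast = OrderedDict()   # pages currently "in fast memory"
        self.freq = Counter()       # access counts, used by the LFU policy
        self.hits = 0

    def access(self, page):
        self.freq[page] += 1
        if page in self.fast:
            self.hits += 1
            self.fast.move_to_end(page)   # refresh recency for LRU
            return
        # Page is in slow memory: migrate it up, evicting a victim if needed.
        if len(self.fast) >= FAST_CAPACITY:
            self.evict()
        self.fast[page] = True

    def evict(self):
        if self.name == "lru":
            victim = next(iter(self.fast))                    # oldest entry
        elif self.name == "lfu":
            victim = min(self.fast, key=lambda p: self.freq[p])
        else:                                                 # "random"
            victim = random.choice(list(self.fast))
        del self.fast[victim]

def pick_policy(trace):
    """Run all candidate policies in parallel over one epoch of sampled
    accesses and return the name of the policy with the most fast-tier hits."""
    emulators = [PolicyEmulator(n) for n in ("lru", "lfu", "random")]
    for page in trace:
        for emu in emulators:
            emu.access(page)
    return max(emulators, key=lambda e: e.hits).name

if __name__ == "__main__":
    # A toy access trace: a hot working set plus occasional cold pages.
    trace = [0, 1, 2, 3, 0, 1, 2, 3, 9, 0, 1, 2, 3, 7, 0, 1]
    print("best policy for this epoch:", pick_policy(trace))
```

In the system described by the abstract, the access information comes from periodically scanned accessed bits rather than an exact per-access trace, so the per-policy scores are necessarily approximate; the sketch only conveys the selection loop, not the sampling mechanism.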
DOI: 10.1109/TC.2020.3036686
fullrecord <record><control><sourceid>proquest_cross</sourceid><recordid>TN_cdi_crossref_primary_10_1109_TC_2020_3036686</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><ieee_id>9252863</ieee_id><sourcerecordid>2610171200</sourcerecordid><originalsourceid>FETCH-LOGICAL-c330t-9f8afa5aff248c9ef1a907601a72d11f3eca37c3b943236d383384937b4959503</originalsourceid><addsrcrecordid>eNo9kMFLwzAUxoMoOKdnD14Cnru95DVpcxxFN2HDgRWPIWuTmbGtM-mE_vdudHh6h-_3fQ9-hDwyGDEGalwWIw4cRggoZS6vyIAJkSVKCXlNBgAsTxSmcEvuYtwAgOSgBmQ6qc2h9b-WLs3a0oVfB9P6Zk-XzdZXHf3y7TedHdd9Hqnf09LbYGu6sLsmdPSji63dxXty48w22ofLHZLP15eymCXz9-lbMZknFSK0iXK5cUYY53iaV8o6ZhRkEpjJeM2YQ1sZzCpcqRQ5yhpzxDxVmK1SJZQAHJLnfvcQmp-jja3eNMewP73UXDJgGeNwpsY9VYUmxmCdPgS_M6HTDPTZli4LfbalL7ZOjae-4a21_7Tigp9C_AMaSGN1</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>2610171200</pqid></control><display><type>article</type><title>Adaptive Page Migration Policy With Huge Pages in Tiered Memory Systems</title><source>IEEE Electronic Library (IEL)</source><creator>Heo, Taekyung ; Wang, Yang ; Cui, Wei ; Huh, Jaehyuk ; Zhang, Lintao</creator><creatorcontrib>Heo, Taekyung ; Wang, Yang ; Cui, Wei ; Huh, Jaehyuk ; Zhang, Lintao</creatorcontrib><description>To accommodate the growing demand for memory capacity in a cost-effective way, multiple types of memory are incorporated in a single system. In such tiered memory systems consisting of small fast and large slow memory components, accurately identifying the performance importance of pages is critical to properly migrate hot pages to fast memory. Meanwhile, growing address translation cost due to the increasing memory footprints, helped adopting huge pages in common systems. Although such page hotness identification problems have existed for a long time, this article revisits the problem in the new context of tiered memory systems and huge pages. This article first investigates the memory locality behaviors of applications with three potential migration polices, least-recently-used (LRU), least-frequently-used (LFU), and random with huge pages. The evaluation shows that none of the three migration policies excel the others, as the effectiveness of each policy depends on application behaviors. In addition, the results show huge pages can be effective even with page migration, if a proper migration policy is used. Based on the observation, this paper proposes a novel dynamic policy selection mechanism, which identifies the best migration policy for a given workload. It allows multiple concurrently running workloads to adopt different policies. To find the optimal one for each workload, this study first identifies key features that must be inferred from limited approximate memory access information collected using accessed bits in page tables. In addition, it proposes a parallel emulation of alternative policies to assess the benefit of possible alternatives. 
The proposed dynamic policy selection can achieve 23.8percent performance improvement compared to a prior approximate mechanism based on LRU lists in Linux systems.</description><identifier>ISSN: 0018-9340</identifier><identifier>EISSN: 1557-9956</identifier><identifier>DOI: 10.1109/TC.2020.3036686</identifier><identifier>CODEN: ITCOB4</identifier><language>eng</language><publisher>New York: IEEE</publisher><subject>Adaptive systems ; Bandwidth ; Computer memory ; Emulation ; Hardware ; huge pages ; Kernel ; Linux ; Memory management ; Migration ; page hotness ; page migration ; Policies ; Tiered memory ; Workload ; Workloads</subject><ispartof>IEEE transactions on computers, 2022-01, Vol.71 (1), p.53-68</ispartof><rights>Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022</rights><lds50>peer_reviewed</lds50><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed><citedby>FETCH-LOGICAL-c330t-9f8afa5aff248c9ef1a907601a72d11f3eca37c3b943236d383384937b4959503</citedby><cites>FETCH-LOGICAL-c330t-9f8afa5aff248c9ef1a907601a72d11f3eca37c3b943236d383384937b4959503</cites><orcidid>0000-0001-8275-2377 ; 0000-0002-1742-047X</orcidid></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><linktohtml>$$Uhttps://ieeexplore.ieee.org/document/9252863$$EHTML$$P50$$Gieee$$Hfree_for_read</linktohtml><link.rule.ids>315,781,785,797,27929,27930,54763</link.rule.ids></links><search><creatorcontrib>Heo, Taekyung</creatorcontrib><creatorcontrib>Wang, Yang</creatorcontrib><creatorcontrib>Cui, Wei</creatorcontrib><creatorcontrib>Huh, Jaehyuk</creatorcontrib><creatorcontrib>Zhang, Lintao</creatorcontrib><title>Adaptive Page Migration Policy With Huge Pages in Tiered Memory Systems</title><title>IEEE transactions on computers</title><addtitle>TC</addtitle><description>To accommodate the growing demand for memory capacity in a cost-effective way, multiple types of memory are incorporated in a single system. In such tiered memory systems consisting of small fast and large slow memory components, accurately identifying the performance importance of pages is critical to properly migrate hot pages to fast memory. Meanwhile, growing address translation cost due to the increasing memory footprints, helped adopting huge pages in common systems. Although such page hotness identification problems have existed for a long time, this article revisits the problem in the new context of tiered memory systems and huge pages. This article first investigates the memory locality behaviors of applications with three potential migration polices, least-recently-used (LRU), least-frequently-used (LFU), and random with huge pages. The evaluation shows that none of the three migration policies excel the others, as the effectiveness of each policy depends on application behaviors. In addition, the results show huge pages can be effective even with page migration, if a proper migration policy is used. Based on the observation, this paper proposes a novel dynamic policy selection mechanism, which identifies the best migration policy for a given workload. It allows multiple concurrently running workloads to adopt different policies. To find the optimal one for each workload, this study first identifies key features that must be inferred from limited approximate memory access information collected using accessed bits in page tables. 
In addition, it proposes a parallel emulation of alternative policies to assess the benefit of possible alternatives. The proposed dynamic policy selection can achieve 23.8percent performance improvement compared to a prior approximate mechanism based on LRU lists in Linux systems.</description><subject>Adaptive systems</subject><subject>Bandwidth</subject><subject>Computer memory</subject><subject>Emulation</subject><subject>Hardware</subject><subject>huge pages</subject><subject>Kernel</subject><subject>Linux</subject><subject>Memory management</subject><subject>Migration</subject><subject>page hotness</subject><subject>page migration</subject><subject>Policies</subject><subject>Tiered memory</subject><subject>Workload</subject><subject>Workloads</subject><issn>0018-9340</issn><issn>1557-9956</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2022</creationdate><recordtype>article</recordtype><sourceid>ESBDL</sourceid><sourceid>RIE</sourceid><recordid>eNo9kMFLwzAUxoMoOKdnD14Cnru95DVpcxxFN2HDgRWPIWuTmbGtM-mE_vdudHh6h-_3fQ9-hDwyGDEGalwWIw4cRggoZS6vyIAJkSVKCXlNBgAsTxSmcEvuYtwAgOSgBmQ6qc2h9b-WLs3a0oVfB9P6Zk-XzdZXHf3y7TedHdd9Hqnf09LbYGu6sLsmdPSji63dxXty48w22ofLHZLP15eymCXz9-lbMZknFSK0iXK5cUYY53iaV8o6ZhRkEpjJeM2YQ1sZzCpcqRQ5yhpzxDxVmK1SJZQAHJLnfvcQmp-jja3eNMewP73UXDJgGeNwpsY9VYUmxmCdPgS_M6HTDPTZli4LfbalL7ZOjae-4a21_7Tigp9C_AMaSGN1</recordid><startdate>20220101</startdate><enddate>20220101</enddate><creator>Heo, Taekyung</creator><creator>Wang, Yang</creator><creator>Cui, Wei</creator><creator>Huh, Jaehyuk</creator><creator>Zhang, Lintao</creator><general>IEEE</general><general>The Institute of Electrical and Electronics Engineers, Inc. (IEEE)</general><scope>97E</scope><scope>ESBDL</scope><scope>RIA</scope><scope>RIE</scope><scope>AAYXX</scope><scope>CITATION</scope><scope>7SC</scope><scope>7SP</scope><scope>8FD</scope><scope>JQ2</scope><scope>L7M</scope><scope>L~C</scope><scope>L~D</scope><orcidid>https://orcid.org/0000-0001-8275-2377</orcidid><orcidid>https://orcid.org/0000-0002-1742-047X</orcidid></search><sort><creationdate>20220101</creationdate><title>Adaptive Page Migration Policy With Huge Pages in Tiered Memory Systems</title><author>Heo, Taekyung ; Wang, Yang ; Cui, Wei ; Huh, Jaehyuk ; Zhang, Lintao</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-LOGICAL-c330t-9f8afa5aff248c9ef1a907601a72d11f3eca37c3b943236d383384937b4959503</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2022</creationdate><topic>Adaptive systems</topic><topic>Bandwidth</topic><topic>Computer memory</topic><topic>Emulation</topic><topic>Hardware</topic><topic>huge pages</topic><topic>Kernel</topic><topic>Linux</topic><topic>Memory management</topic><topic>Migration</topic><topic>page hotness</topic><topic>page migration</topic><topic>Policies</topic><topic>Tiered memory</topic><topic>Workload</topic><topic>Workloads</topic><toplevel>peer_reviewed</toplevel><toplevel>online_resources</toplevel><creatorcontrib>Heo, Taekyung</creatorcontrib><creatorcontrib>Wang, Yang</creatorcontrib><creatorcontrib>Cui, Wei</creatorcontrib><creatorcontrib>Huh, Jaehyuk</creatorcontrib><creatorcontrib>Zhang, Lintao</creatorcontrib><collection>IEEE All-Society Periodicals Package (ASPP) 2005-present</collection><collection>IEEE Open Access Journals</collection><collection>IEEE All-Society Periodicals Package (ASPP) 1998-Present</collection><collection>IEEE Electronic Library 
(IEL)</collection><collection>CrossRef</collection><collection>Computer and Information Systems Abstracts</collection><collection>Electronics &amp; Communications Abstracts</collection><collection>Technology Research Database</collection><collection>ProQuest Computer Science Collection</collection><collection>Advanced Technologies Database with Aerospace</collection><collection>Computer and Information Systems Abstracts – Academic</collection><collection>Computer and Information Systems Abstracts Professional</collection><jtitle>IEEE transactions on computers</jtitle></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext</fulltext></delivery><addata><au>Heo, Taekyung</au><au>Wang, Yang</au><au>Cui, Wei</au><au>Huh, Jaehyuk</au><au>Zhang, Lintao</au><format>journal</format><genre>article</genre><ristype>JOUR</ristype><atitle>Adaptive Page Migration Policy With Huge Pages in Tiered Memory Systems</atitle><jtitle>IEEE transactions on computers</jtitle><stitle>TC</stitle><date>2022-01-01</date><risdate>2022</risdate><volume>71</volume><issue>1</issue><spage>53</spage><epage>68</epage><pages>53-68</pages><issn>0018-9340</issn><eissn>1557-9956</eissn><coden>ITCOB4</coden><abstract>To accommodate the growing demand for memory capacity in a cost-effective way, multiple types of memory are incorporated in a single system. In such tiered memory systems consisting of small fast and large slow memory components, accurately identifying the performance importance of pages is critical to properly migrate hot pages to fast memory. Meanwhile, growing address translation cost due to the increasing memory footprints, helped adopting huge pages in common systems. Although such page hotness identification problems have existed for a long time, this article revisits the problem in the new context of tiered memory systems and huge pages. This article first investigates the memory locality behaviors of applications with three potential migration polices, least-recently-used (LRU), least-frequently-used (LFU), and random with huge pages. The evaluation shows that none of the three migration policies excel the others, as the effectiveness of each policy depends on application behaviors. In addition, the results show huge pages can be effective even with page migration, if a proper migration policy is used. Based on the observation, this paper proposes a novel dynamic policy selection mechanism, which identifies the best migration policy for a given workload. It allows multiple concurrently running workloads to adopt different policies. To find the optimal one for each workload, this study first identifies key features that must be inferred from limited approximate memory access information collected using accessed bits in page tables. In addition, it proposes a parallel emulation of alternative policies to assess the benefit of possible alternatives. The proposed dynamic policy selection can achieve 23.8percent performance improvement compared to a prior approximate mechanism based on LRU lists in Linux systems.</abstract><cop>New York</cop><pub>IEEE</pub><doi>10.1109/TC.2020.3036686</doi><tpages>16</tpages><orcidid>https://orcid.org/0000-0001-8275-2377</orcidid><orcidid>https://orcid.org/0000-0002-1742-047X</orcidid><oa>free_for_read</oa></addata></record>
fulltext fulltext
ISSN: 0018-9340
EISSN: 1557-9956
Source: IEEE Electronic Library (IEL)
Subjects: Adaptive systems; Bandwidth; Computer memory; Emulation; Hardware; huge pages; Kernel; Linux; Memory management; Migration; page hotness; page migration; Policies; Tiered memory; Workload; Workloads