Proactively Breaking Large Pages to Improve Memory Overcommitment Performance in VMware ESXi

Bibliographic Details
Published in: SIGPLAN Notices, 2015-08, Vol. 50 (7), p. 39-51
Main authors: Guo, Fei; Kim, Seongbeom; Baskakov, Yury; Banerjee, Ishan
Format: Article
Language: English
ISSN: 0362-1340
EISSN: 1558-1160
DOI: 10.1145/2817817.2731187
Online access: Full text via ACM Digital Library

Full Description

VMware ESXi leverages hardware support for MMU virtualization available in modern Intel/AMD CPUs. To optimize address translation performance on such CPUs, ESXi preferentially uses host large pages (2 MB on x86-64 systems) to back a VM's guest memory. While host large pages deliver the best performance when the host has ample free memory, they increase host memory pressure and effectively defeat page sharing. The host is therefore more likely to reach the point where ESXi must reclaim VM memory through far more expensive techniques such as ballooning or host swapping. As a result, using host large pages can significantly hurt the consolidation ratio. To address this problem, we propose a new host large page management policy that: a) identifies "cold" large pages and breaks them even when the host has plenty of free memory; b) proactively breaks all large pages when host free memory becomes scarce, but before the host starts ballooning or swapping; c) reclaims the small pages within the broken large pages through page sharing. With the new policy, shareable small pages can be shared much earlier, and the amount of memory that needs to be ballooned or swapped when host memory pressure is high is largely reduced. We also propose an algorithm that dynamically adjusts the page sharing rate while proactively breaking large pages, using a VM large page shareability estimator for higher efficiency. Experimental results show that the proposed large page management policy improves the performance of various workloads by up to 2.1x by significantly reducing the amount of ballooned or swapped memory when host memory pressure is high. Applications still fully benefit from host large pages when memory pressure is low.
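
The three-part policy and the rate-adjustment idea summarized above can be made concrete with a short sketch. The Python below is a minimal illustration under assumed data structures, not ESXi's implementation: every name in it (LargePage, break_and_share, memsched_tick, low_watermark, the content-keyed share_map, the [0, 1] shareability estimate) is a hypothetical stand-in, since the real policy operates on hardware page tables inside the hypervisor's memory scheduler.

# Minimal sketch of the proposed policy; all names and thresholds are
# hypothetical. ESXi's actual policy runs inside the hypervisor's
# memory scheduler and manipulates hardware page-table mappings.

from dataclasses import dataclass
from typing import Dict, List

SMALL_PAGES_PER_LARGE = 512  # a 2 MB large page spans 512 x 4 KB small pages


@dataclass
class LargePage:
    accessed_recently: bool   # set by a periodic access-bit scan (assumed)
    contents: List[bytes]     # one entry per backing 4 KB small page
    broken: bool = False


def is_cold(page: LargePage) -> bool:
    # a) "cold" detection: no access observed during the last scan period.
    return not page.accessed_recently


def break_and_share(page: LargePage, share_map: Dict[bytes, int]) -> int:
    # Break one 2 MB mapping into its small pages, then run content-based
    # page sharing over them (c). Returns how many pages were deduplicated.
    page.broken = True
    shared = 0
    for data in page.contents:
        if data in share_map:
            share_map[data] += 1   # collapse onto the existing copy
            shared += 1
        else:
            share_map[data] = 1
    return shared


def adjusted_share_rate(base_rate: int, shareability: float) -> int:
    # Sketch of the dynamic-rate idea: a per-VM shareability estimate in
    # [0, 1] scales how aggressively that VM's pages are scanned.
    return int(base_rate * (1.0 + shareability))


def memsched_tick(pages: List[LargePage], share_map: Dict[bytes, int],
                  host_free: int, low_watermark: int) -> int:
    # One pass of the policy. low_watermark marks the point where free
    # memory is scarce but ballooning/swapping have not yet started.
    scarce = host_free < low_watermark
    reclaimed = 0
    for page in pages:
        if page.broken:
            continue
        if is_cold(page):
            # a) break cold large pages even with plenty of free memory.
            reclaimed += break_and_share(page, share_map)
        elif scarce:
            # b) proactively break ALL remaining large pages once free
            # memory dips below the watermark, ahead of ballooning.
            reclaimed += break_and_share(page, share_map)
    return reclaimed

Both branches funnel into the same break-and-share path; the watermark check only changes when a page qualifies. That ordering is the heart of the policy: breaking and sharing begin before free memory is exhausted, so ballooning and host swapping see far less demand.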