Adaptive Page Migration Policy With Huge Pages in Tiered Memory Systems
Published in: IEEE Transactions on Computers, 2022-01, Vol. 71 (1), p. 53-68
Main Authors: , , , ,
Format: Article
Language: English
Online Access: Full text
Abstract: To accommodate the growing demand for memory capacity in a cost-effective way, multiple types of memory are incorporated in a single system. In such tiered memory systems, consisting of small fast and large slow memory components, accurately identifying the performance importance of pages is critical to properly migrate hot pages to fast memory. Meanwhile, the growing address translation cost caused by increasing memory footprints has driven the adoption of huge pages in common systems. Although the page hotness identification problem has existed for a long time, this article revisits it in the new context of tiered memory systems and huge pages. The article first investigates the memory locality behaviors of applications under three potential migration policies with huge pages: least-recently-used (LRU), least-frequently-used (LFU), and random. The evaluation shows that none of the three migration policies excels the others, as the effectiveness of each policy depends on application behavior. In addition, the results show that huge pages can be effective even with page migration, if a proper migration policy is used. Based on this observation, the article proposes a novel dynamic policy selection mechanism, which identifies the best migration policy for a given workload and allows multiple concurrently running workloads to adopt different policies. To find the optimal policy for each workload, the study first identifies key features that must be inferred from limited, approximate memory access information collected using accessed bits in page tables. It then proposes a parallel emulation of alternative policies to assess the benefit of possible alternatives. The proposed dynamic policy selection achieves a 23.8 percent performance improvement compared to a prior approximate mechanism based on LRU lists in Linux systems.
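The abstract describes the core mechanism at a high level: candidate migration policies (LRU, LFU, random) are emulated in parallel over approximate access information gathered from page-table accessed bits, and the policy that would have served the workload best is selected for that workload. The Python sketch below illustrates only this selection idea under simplifying assumptions; it is not the authors' implementation, and the class names, the fast-memory capacity, and the synthetic access trace are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch (not the paper's implementation) of dynamic migration-policy
# selection: three candidate policies -- LRU, LFU, and random -- are emulated
# in parallel over the same stream of sampled page accesses (standing in for
# accessed-bit scans of the page tables), and the policy that would have kept
# the most accesses in fast memory is chosen for the next interval.
import random
from collections import OrderedDict, defaultdict

FAST_CAPACITY = 4  # number of (huge) pages assumed to fit in fast memory


class LRUEmulator:
    """Keeps the most recently accessed pages in emulated fast memory."""
    def __init__(self, capacity):
        self.capacity, self.pages, self.hits = capacity, OrderedDict(), 0

    def access(self, page):
        if page in self.pages:
            self.hits += 1
            self.pages.move_to_end(page)
        else:
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)   # evict least recently used
            self.pages[page] = True


class LFUEmulator:
    """Keeps the most frequently accessed pages in emulated fast memory."""
    def __init__(self, capacity):
        self.capacity, self.pages, self.hits = capacity, set(), 0
        self.freq = defaultdict(int)

    def access(self, page):
        self.freq[page] += 1
        if page in self.pages:
            self.hits += 1
        else:
            if len(self.pages) >= self.capacity:
                victim = min(self.pages, key=self.freq.__getitem__)
                self.pages.discard(victim)       # evict least frequently used
            self.pages.add(page)


class RandomEmulator:
    """Evicts a randomly chosen resident page on every miss."""
    def __init__(self, capacity):
        self.capacity, self.pages, self.hits = capacity, set(), 0

    def access(self, page):
        if page in self.pages:
            self.hits += 1
        else:
            if len(self.pages) >= self.capacity:
                self.pages.discard(random.choice(tuple(self.pages)))
            self.pages.add(page)


def select_policy(access_trace, capacity=FAST_CAPACITY):
    """Replay one sampling interval through all emulators and pick the winner."""
    emulators = {"LRU": LRUEmulator(capacity),
                 "LFU": LFUEmulator(capacity),
                 "random": RandomEmulator(capacity)}
    for page in access_trace:
        for emu in emulators.values():
            emu.access(page)
    return max(emulators, key=lambda name: emulators[name].hits)


if __name__ == "__main__":
    # Synthetic, illustrative trace: frequency-skewed workloads tend to favour
    # LFU, while recency-dominated workloads tend to favour LRU.
    trace = [0, 1, 2, 3, 0, 1, 0, 1, 4, 5, 0, 1, 6, 0, 1, 7]
    print("selected policy:", select_policy(trace))
```

In the paper's setting the per-policy hit counts would be derived from periodic accessed-bit scans rather than an exact trace, and each concurrently running workload would carry its own set of emulators so that different workloads can settle on different policies.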
ISSN: 0018-9340, 1557-9956
DOI: 10.1109/TC.2020.3036686