Why Does Data Prefetching Not Work for Modern Workloads?

Bibliographic Details
Published in: The Computer Journal, 2016-02, Vol. 59 (2), pp. 244-259
Authors: Naderan-Tahan, Mahmood; Sarbazi-Azad, Hamid
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Emerging cloud workloads in today's modern data centers have large memory footprints that render the processor's caches ineffective. Since the L1 data cache is on the critical path, high data cache miss rates degrade performance. To address this issue in traditional workloads, data prefetchers predict the data that will be needed, hiding memory latency and ultimately improving performance. In this paper, we focus on the L1 data cache, because it is the cache level with the greatest influence on processor performance, and ask why state-of-the-art prefetching methods are inefficient for modern workloads in terms of both performance and energy consumption. Results show that, on the one hand, these workloads suffer from low temporal locality and address repetition, while their consecutive accesses exhibit better spatial locality than those of traditional workloads. On the other hand, their miss patterns are spatially irregular, leaving little opportunity to eliminate repetitive miss patterns. For these reasons, prefetching methods perform poorly and, because they generate additional accesses to the lower levels of the memory hierarchy, it is not energy efficient to use prefetchers for modern workloads.
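
To make the locality argument in the abstract concrete, the sketch below (not taken from the paper; the 64-byte block size, the next-block stride predictor, and the synthetic traces are illustrative assumptions) drives a toy stride prefetcher with two address streams: a streaming scan standing in for a regular traditional workload, and a randomly scattered stream standing in for the spatially irregular miss patterns the authors observe in cloud workloads.

/*
 * Illustrative sketch only: a toy stride prefetcher evaluated on two
 * synthetic access traces. Block size, trace shapes and the predictor
 * are assumptions for illustration, not the paper's methodology.
 */
#include <stdio.h>
#include <stdlib.h>

#define BLOCK 64          /* assumed cache-line size in bytes */
#define TRACE_LEN 10000   /* assumed trace length */

/* Count accesses the prefetcher would have covered: after each access it
 * predicts the block at (current block + last observed stride). */
static int prefetch_hits(const unsigned long *addr, int n) {
    unsigned long last = addr[0] / BLOCK;
    unsigned long predicted = 0;
    int have_prediction = 0;
    int hits = 0;
    for (int i = 1; i < n; i++) {
        unsigned long blk = addr[i] / BLOCK;
        if (have_prediction && blk == predicted)
            hits++;
        long stride = (long)blk - (long)last;  /* stride between consecutive blocks */
        predicted = (unsigned long)((long)blk + stride);
        have_prediction = 1;
        last = blk;
    }
    return hits;
}

int main(void) {
    static unsigned long regular[TRACE_LEN], irregular[TRACE_LEN];
    srand(1);
    for (int i = 0; i < TRACE_LEN; i++) {
        regular[i]   = (unsigned long)i * BLOCK;                      /* streaming scan */
        irregular[i] = ((unsigned long)(rand() % (1 << 20))) * BLOCK; /* scattered, pointer-chasing-like */
    }
    printf("regular trace:   %d of %d accesses predicted\n",
           prefetch_hits(regular, TRACE_LEN), TRACE_LEN - 1);
    printf("irregular trace: %d of %d accesses predicted\n",
           prefetch_hits(irregular, TRACE_LEN), TRACE_LEN - 1);
    return 0;
}

Compiling the sketch with any C compiler and running it prints near-complete coverage for the regular trace and near-zero coverage for the irregular one, which mirrors the behaviour gap the abstract describes: every wrong prediction on the irregular trace would also trigger a useless access to the lower memory levels, hence the energy argument against prefetching for such workloads.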
ISSN: 0010-4620, 1460-2067
DOI: 10.1093/comjnl/bxv112