Energy-Efficient Shared Cache Using Way Prediction Based on Way Access Dominance Detection


Bibliographic Details
Published in: IEEE Access, 2021, Vol. 9, p. 155048-155057
Main authors: Oh, Yun-Seok; Chung, Eui-Young
Format: Article
Language: English
Online access: Full text
Description
Abstract: To meet the performance demands of chip multiprocessors, chip designers have increased the capacity and hierarchy of cache memories. Accordingly, a shared lower-level cache reduces conflict misses by adopting a multi-way set-associative structure with high associativity. This structure allows fast access because all the ways in a cache set can be accessed in parallel; however, it consumes a large amount of dynamic energy. Various schemes have therefore been proposed to increase the energy efficiency of the cache memory, using way prediction or partial comparison to reduce unnecessary way accesses. This paper proposes a way prediction algorithm suitable for a shared second-level cache with high associativity. The algorithm is based on real-time way access dominance detection (WADD). Through this detection, it determines the number and location of way candidates suited to each partial access pattern among the access patterns fragmented by the first-level cache replacement policy and intermingled by accesses from multiple cores, which enables efficient way prediction. Simulation results show that WADD exhibits the highest energy efficiency among the comparison groups, reducing the energy-delay product by 13.5% compared with a conventional cache without way prediction. This result is achieved by reducing the way prediction penalty through fast detection and high prediction accuracy.
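
The abstract describes the mechanism only at a high level. The following C++ sketch is a hypothetical illustration, not the authors' WADD design: each cache set keeps per-way access counters, and when a single way dominates recent accesses to the set, only that way is probed first, falling back to a full parallel probe on a mispredict. The way count, the DOMINANCE_THRESHOLD constant, and the counter scheme are assumptions made for illustration.

// Hypothetical sketch of dominance-based way prediction (not the paper's exact WADD algorithm).
#include <array>
#include <cstdint>
#include <optional>

constexpr int WAYS = 16;                     // high associativity, e.g. a shared L2
constexpr uint32_t DOMINANCE_THRESHOLD = 8;  // assumed minimum sample count before predicting

struct Way {
    bool     valid = false;
    uint64_t tag   = 0;
};

struct Set {
    std::array<Way, WAYS>      ways{};
    std::array<uint32_t, WAYS> hits{};   // per-way access counters
    uint32_t                   total = 0;

    // Return the dominant way if a single way accounts for a majority of
    // recent hits to this set; otherwise make no prediction.
    std::optional<int> predict() const {
        if (total < DOMINANCE_THRESHOLD) return std::nullopt;
        for (int w = 0; w < WAYS; ++w)
            if (hits[w] * 2 > total) return w;
        return std::nullopt;
    }

    // Lookup with way prediction; `probes` counts tag-array reads,
    // a rough proxy for the dynamic energy spent on the access.
    bool lookup(uint64_t tag, int& probes) {
        if (auto w = predict()) {             // predicted single-way probe
            ++probes;
            if (ways[*w].valid && ways[*w].tag == tag) {
                ++hits[*w]; ++total;
                return true;
            }
        }
        for (int w = 0; w < WAYS; ++w) {      // fallback: probe all ways in parallel
            ++probes;
            if (ways[w].valid && ways[w].tag == tag) {
                ++hits[w]; ++total;
                return true;
            }
        }
        return false;                         // miss (fill and replacement omitted for brevity)
    }
};

A real controller would also age or reset the counters so that dominance can track phase changes, and would handle fills and replacement; the sketch only shows how a dominance detector can narrow the set of probed ways.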
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3126739