Differential FCM: increasing value prediction accuracy by improving table usage efficiency
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Order full text
Summary: Value prediction is a relatively new technique to increase the Instruction Level Parallelism (ILP) in future microprocessors. An important problem when designing a value predictor is efficiency: an accurate predictor requires huge prediction tables. This is especially the case for the finite context method (FCM) predictor, the most accurate one. In this paper, we show that the prediction accuracy of the FCM can be greatly improved by making the FCM predict strides instead of values. This new predictor is called the differential finite context method (DFCM) predictor. The DFCM predictor outperforms a similar FCM predictor by as much as 33%, depending on the prediction table size. If we take the additional storage into account, the difference is still 15% for realistic predictor sizes. We use several metrics to show that the key to this success is reduced aliasing in the level-2 table. We also show that the DFCM is superior to hybrid predictors based on FCM and stride predictors, since its prediction accuracy is higher than that of a hybrid predictor using a perfect meta-predictor.
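The core idea described in the summary, having the second-level table store strides rather than full values, can be illustrated with a small sketch. The C code below is a minimal, illustrative DFCM-style predictor, not the configuration evaluated in the paper: the table sizes, history depth, and hash function are assumptions chosen for clarity.

```c
/*
 * Minimal sketch of a DFCM-style value predictor: an FCM predictor whose
 * level-2 table holds strides instead of values.  Table sizes, history
 * depth and the hash function are illustrative assumptions only.
 */
#include <stdint.h>

#define L1_SIZE 1024           /* level-1 entries, indexed by instruction PC */
#define L2_SIZE 4096           /* level-2 entries, indexed by stride history */

struct l1_entry {
    uint64_t last_value;       /* last result produced by this instruction  */
    uint32_t history;          /* hashed history of its recent strides      */
};

static struct l1_entry l1[L1_SIZE];
static int64_t l2[L2_SIZE];    /* predicted next stride for each context    */

/* Fold the newest stride into the history hash (illustrative hash). */
static uint32_t fold(uint32_t history, int64_t stride)
{
    return ((history << 3) ^ (uint32_t)stride) % L2_SIZE;
}

/* Predict the next value an instruction at 'pc' will produce. */
uint64_t dfcm_predict(uint64_t pc)
{
    struct l1_entry *e = &l1[pc % L1_SIZE];
    return e->last_value + l2[e->history];   /* last value + predicted stride */
}

/* Update the predictor once the actual value is known. */
void dfcm_update(uint64_t pc, uint64_t actual)
{
    struct l1_entry *e = &l1[pc % L1_SIZE];
    int64_t stride = (int64_t)(actual - e->last_value);

    l2[e->history] = stride;          /* learn the stride for this context   */
    e->history = fold(e->history, stride);
    e->last_value = actual;
}
```

Because many instructions share the same small set of strides (for example, constant loop increments), level-2 entries storing strides collide far less destructively than entries storing full 64-bit values, which is consistent with the reduced aliasing the summary reports.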
ISSN: 1530-0897, 2378-203X
DOI: 10.1109/HPCA.2001.903264