LMT: Accurate and Resource-Scalable Slowdown Prediction


Detailed Description

Bibliographic Details
Published in: IEEE Computer Architecture Letters, 2022-07, Vol. 21 (2), p. 1-4
Authors: Salvesen, Peter; Jahre, Magnus
Format: Article
Language: English
Description
Abstract: Multi-core processors suffer from inter-application interference, which makes the performance of an application depend on the behavior of the applications it happens to be co-scheduled with. This results in performance variability, which is undesirable, and researchers have hence proposed numerous schemes for predicting the performance slowdown caused by inter-application interference. While a slowdown predictor's primary objective is to achieve high accuracy, it must typically also respect resource constraints. It is hence beneficial to be able to scale the resource consumption of the predictor, but state-of-the-art slowdown predictors are not resource-scalable. We hence propose to construct predictors using Linear Model Trees (LMTs), which we show to be accurate and resource-scalable. More specifically, our 40-leaf-node LMT-40 predictor yields a 6.6% prediction error compared to the 8.4% error of state-of-the-art GDP at similar storage overhead. In contrast, our LMT-10 predictor reduces storage overhead by 34.6% compared to GDP while only increasing prediction error to 9.4%.
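The abstract's key idea, a Linear Model Tree, is a decision tree whose leaves each hold a linear regression model, so prediction cost and storage scale with the number of leaves. The following minimal sketch illustrates that structure only; the feature indices, thresholds, and coefficients are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of inference with a Linear Model Tree (LMT): internal
# nodes split on a feature threshold, and each leaf evaluates its own
# linear model. All numbers below are hypothetical.

class Leaf:
    def __init__(self, weights, bias):
        self.weights = weights  # coefficients of this leaf's linear model
        self.bias = bias

    def predict(self, x):
        return sum(w * xi for w, xi in zip(self.weights, x)) + self.bias


class Split:
    def __init__(self, feature, threshold, left, right):
        self.feature = feature      # index into the feature vector
        self.threshold = threshold
        self.left = left            # taken when x[feature] <= threshold
        self.right = right

    def predict(self, x):
        child = self.left if x[self.feature] <= self.threshold else self.right
        return child.predict(x)


# A toy two-leaf tree; feature 0 could stand in for, e.g., a contention
# metric such as cache miss rate (purely illustrative).
tree = Split(feature=0, threshold=0.5,
             left=Leaf(weights=[0.2, 0.1], bias=1.0),
             right=Leaf(weights=[1.5, 0.3], bias=1.2))

print(tree.predict([0.3, 2.0]))  # routed to the left leaf  → 1.26
print(tree.predict([0.8, 2.0]))  # routed to the right leaf → 3.0
```

Because each leaf adds a fixed-size set of coefficients, growing or pruning the tree (e.g., 10 versus 40 leaves) trades accuracy against storage, which is the resource-scalability property the abstract highlights.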
ISSN:1556-6056
1556-6064
DOI:10.1109/LCA.2022.3203483