Methodology for Fine-Grain GPU Power Visibility and Insights
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: The ubiquity of AI, coupled with its steep power demands, makes
optimizing GPU power a priority, as large GPU-based clusters are often
employed to train and serve AI models. An important first step in optimizing
GPU power consumption is high-fidelity, fine-grain power measurement of key AI
computations on GPUs. To this end, we observe that as GPUs get more powerful,
the resulting sub-millisecond to millisecond executions make fine-grain power
analysis challenging. In this work, we first carefully identify the challenges
in obtaining fine-grain GPU power profiles. To address these challenges, we
devise the FinGraV methodology, which employs execution time binning, careful
CPU-GPU time synchronization, and power profile differentiation to collect
fine-grain GPU power profiles across prominent AI computations and across a
spectrum of scenarios. Using FinGraV power profiles, we make several
observations pertaining to GPU power variation over executions and over time,
GPU sub-component power consumption across different scenarios, and power
behavior over interleaved executions of multiple computations. Equipped with
these observations, we conclude with several recommendations to optimize power
for these ubiquitous accelerators.
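The abstract names two of the methodology's building blocks: execution time binning (grouping per-execution power traces by how long each execution took, so that sub-millisecond runs are averaged with comparable runs) and power profile differentiation (subtracting a baseline trace to isolate the power attributable to the computation of interest). The sketch below illustrates these two ideas on synthetic data only; it is a hypothetical illustration, not the paper's FinGraV implementation, and the function names, the 100 µs bin width, and the `(duration, samples)` input shape are all assumptions.

```python
import statistics

def bin_power_profiles(profiles, bin_width_us=100):
    """Execution time binning (illustrative sketch): group per-execution
    power traces by execution duration, then average the per-execution
    mean power within each duration bin.

    profiles: list of (duration_us, [power_samples_watts]) pairs.
    Returns: {bin_index: mean_power_watts}.
    """
    bins = {}
    for duration_us, samples in profiles:
        key = int(duration_us // bin_width_us)
        bins.setdefault(key, []).append(statistics.mean(samples))
    return {k: statistics.mean(v) for k, v in bins.items()}

def differentiate_profiles(profile_with, profile_baseline):
    """Power profile differentiation (illustrative sketch): subtract a
    baseline trace (e.g., idle power) sample-by-sample to isolate the
    power attributable to the computation under study. Assumes both
    traces are already time-aligned (the CPU-GPU synchronization step)."""
    return [a - b for a, b in zip(profile_with, profile_baseline)]

# Example: two short executions land in bin 1, one longer one in bin 4.
binned = bin_power_profiles([(120, [300.0, 310.0]),
                             (150, [290.0]),
                             (420, [500.0])])
isolated = differentiate_profiles([400.0, 420.0], [100.0, 100.0])
```

With the synthetic inputs above, `binned` maps bin 1 to the average of 305 W and 290 W, and `isolated` is the computation's power after removing the flat 100 W baseline.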
DOI: 10.48550/arxiv.2412.12426