Improving Multi-Instance GPU Efficiency via Sub-Entry Sharing TLB Design
Format: Article
Language: English
Abstract: NVIDIA's Multi-Instance GPU (MIG) technology enables partitioning GPU computing power and memory into separate hardware instances, providing complete isolation of compute resources, caches, and memory. However, prior work identifies that MIG does not extend to the last-level TLB (i.e., the L3 TLB), which remains shared among all instances. To enhance TLB reach, NVIDIA GPUs reorganize the L3 TLB so that each entry holds 16 sub-entries, mapped one-to-one to the address translations of 16 pages of size 64KB located within the same 1MB-aligned range. Our comprehensive investigation of address translation efficiency in MIG identifies two main issues caused by L3 TLB sharing interference: (i) it degrades the performance of co-running applications, and (ii) TLB sub-entries are not fully utilized before eviction. Based on these observations, we propose STAR, which improves the utilization of TLB sub-entries through dynamic sharing of TLB entries across multiple base addresses. STAR evaluates TLB entries by their sub-entry utilization to optimize address translation storage, dynamically switching each entry between shared and non-shared status to match current demand. We show that STAR improves overall performance by an average of 30.2% across various multi-tenant workloads.
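The sub-entry organization described in the abstract can be illustrated with a small sketch: a 1MB-aligned virtual region supplies the entry tag, and the next four address bits select one of 16 sub-entries, each covering a 64KB page. The class and function names below are illustrative assumptions, not NVIDIA's implementation.

```python
# Minimal sketch of a sub-entry L3 TLB entry: one 1MB-aligned region tag
# plus 16 sub-entries, one per 64KB page in that region.
PAGE_SHIFT = 16        # 64 KB pages
REGION_SHIFT = 20      # 1 MB aligned range
SUB_ENTRIES = 1 << (REGION_SHIFT - PAGE_SHIFT)  # 16 sub-entries per entry

class TLBEntry:
    def __init__(self, base_tag):
        self.base_tag = base_tag               # 1 MB-aligned virtual region
        self.valid = [False] * SUB_ENTRIES     # one valid bit per 64 KB page
        self.ppn = [None] * SUB_ENTRIES        # cached translations

def split(vaddr):
    """Decompose a virtual address into (region tag, sub-entry index)."""
    return vaddr >> REGION_SHIFT, (vaddr >> PAGE_SHIFT) & (SUB_ENTRIES - 1)

def lookup(tlb, vaddr):
    tag, idx = split(vaddr)
    entry = tlb.get(tag)
    if entry and entry.valid[idx]:
        return entry.ppn[idx]   # hit on one of the 16 sub-entries
    return None                 # miss: a page walk would be required

def fill(tlb, vaddr, ppn):
    tag, idx = split(vaddr)
    entry = tlb.setdefault(tag, TLBEntry(tag))
    entry.valid[idx] = True
    entry.ppn[idx] = ppn

tlb = {}
fill(tlb, 0x12340000, 0xBEEF)            # one 64 KB page in region 0x123
assert lookup(tlb, 0x12340000) == 0xBEEF
assert lookup(tlb, 0x12350000) is None   # same 1 MB region, unused sub-entry
```

The second assertion shows the underutilization problem the abstract points out: the entry for region 0x123 is allocated in full, but only one of its 16 sub-entries carries a translation.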
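One plausible reading of STAR's dynamic sharing is sketched below: an entry whose sub-entries are underutilized may admit a second base region, so translations from two 1MB ranges share one physical entry. The threshold, slot-ownership scheme, and admission policy are illustrative assumptions, not the paper's exact design.

```python
# Hedged sketch of dynamic entry sharing: a non-shared entry (one base tag)
# switches to shared status (two base tags) while its utilization is low.
SUB_ENTRIES = 16

class StarEntry:
    def __init__(self, tag):
        self.tags = [tag]                     # non-shared: 1 tag; shared: 2
        self.slot_tag = [None] * SUB_ENTRIES  # which base tag owns each slot
        self.ppn = [None] * SUB_ENTRIES

    def utilization(self):
        return sum(t is not None for t in self.slot_tag) / SUB_ENTRIES

    def lookup(self, tag, idx):
        if tag in self.tags and self.slot_tag[idx] == tag:
            return self.ppn[idx]
        return None

    def fill(self, tag, idx, ppn):
        if tag not in self.tags:
            # Assumed policy: admit a second base region only while at most
            # half of the sub-entry slots are live.
            if len(self.tags) < 2 and self.utilization() <= 0.5:
                self.tags.append(tag)
            else:
                return False   # would need a fresh (or evicted) entry
        self.slot_tag[idx] = tag
        self.ppn[idx] = ppn
        return True

e = StarEntry(tag=0x123)
assert e.fill(0x123, 4, 0xAAAA)
assert e.fill(0x456, 7, 0xBBBB)   # second region admitted into the same entry
```

Under this assumed policy, sparse entries pack translations from two regions instead of wasting idle sub-entries, which is the utilization gain the abstract attributes to STAR.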
DOI: 10.48550/arxiv.2404.18361