Accelerator-as-a-Service in Public Clouds: An Intra-Host Traffic Management View for Performance Isolation in the Wild
Saved in:
Main authors: , , , , , ,
Format: Article
Language: English
Online access: Order full text
Summary: I/O devices in public clouds increasingly integrate hardware accelerators, e.g., AWS Nitro, Azure FPGA, and Nvidia BlueField. However, such specialized compute (1) is not explicitly accessible to cloud users with performance guarantees and (2) cannot be leveraged simultaneously by both providers and users, unlike general-purpose compute (e.g., CPUs). Through ten observations, we show that the fundamental difficulty in democratizing accelerators is insufficient performance-isolation support. The key obstacles to enforcing accelerator isolation are (1) too many unknown traffic patterns in public clouds and (2) too many possible contention sources in the datapath. In this work, instead of scheduling such complex traffic on the fly and augmenting isolation support on each system component, we propose to model traffic as network flows and proactively re-shape the traffic to avoid unpredictable contention. We discuss the implications of our findings for the design of future I/O management stacks and device interfaces.
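The core idea in the summary, modeling accelerator traffic as network flows and re-shaping it before it reaches contended datapath components, can be illustrated with a per-tenant token-bucket shaper. This is a minimal sketch under assumed parameters; the class, rates, and tenant names are illustrative and are not taken from the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class TokenBucket:
    """Token-bucket shaper: a request is admitted only if enough
    tokens have accrued, capping a flow's sustained rate and burst."""
    rate: float        # tokens replenished per time unit (sustained cap)
    burst: float       # bucket capacity (maximum burst size)
    tokens: float = 0.0
    last: float = 0.0  # timestamp of the previous refill

    def admit(self, now: float, cost: float) -> bool:
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Two tenant flows sharing one accelerator, each shaped to its own envelope
# so that bursts from one cannot create unpredictable contention for the other.
flows = {
    "tenant_a": TokenBucket(rate=100.0, burst=200.0),
    "tenant_b": TokenBucket(rate=50.0, burst=100.0),
}

# At t=2.0, tenant_a has accrued min(200, 2.0 * 100) = 200 tokens,
# so a request costing 150 tokens is admitted, leaving 50.
admitted = flows["tenant_a"].admit(now=2.0, cost=150.0)
```

Proactively bounding each flow's envelope this way trades some peak throughput for predictability: contention becomes a function of the configured rates rather than of arbitrary, unknown traffic patterns.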
DOI: 10.48550/arxiv.2407.10098