Improved data transfer efficiency for scale‐out heterogeneous workloads using on‐the‐fly I/O link compression


Bibliographic Details
Published in: Concurrency and Computation: Practice and Experience, 2023-05, Vol. 35 (11)
Authors: Plauth, Max; Bruguera Micó, Joan; Polze, Andreas
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract:
Graphics processing units (GPUs) are unarguably vital to keep up with the perpetually growing demand for compute capacity of data-intensive applications. However, the overhead of transferring data between host and GPU memory is already a major limiting factor at the single-node level. The situation intensifies in scale-out scenarios, where data movement becomes even more expensive. By augmenting the CloudCL framework with 842-based compression facilities, this article demonstrates that transparent on-the-fly I/O link compression can yield performance improvements between 1.11x and 2.07x across the tested scale-out GPU workloads.
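The article integrates 842-based compression into the CloudCL framework; neither component is reproduced here. As a loose illustration of the underlying idea (compress a buffer on the sending side of the I/O link, decompress it on the receiving side before handing it to the GPU runtime), the following standalone sketch uses Python's zlib as a stand-in codec. The function names and the synthetic sample buffer are hypothetical and serve only to demonstrate the compress-transfer-decompress flow.

import zlib

# Stand-in codec: the article uses the 842 algorithm; zlib merely illustrates
# the on-the-fly link compression pattern (compress before the transfer,
# decompress after) in a self-contained way.

def send_compressed(payload: bytes) -> bytes:
    """Compress a host-side buffer before it crosses the I/O link."""
    return zlib.compress(payload, 1)  # fast compression level, favoring throughput

def receive_compressed(wire_data: bytes) -> bytes:
    """Decompress on the receiving side before passing the buffer onward."""
    return zlib.decompress(wire_data)

if __name__ == "__main__":
    # A highly compressible synthetic buffer stands in for real input data.
    buffer = bytes(range(256)) * 4096
    wire = send_compressed(buffer)
    restored = receive_compressed(wire)
    assert restored == buffer
    print(f"original: {len(buffer)} B, on the wire: {len(wire)} B "
          f"(ratio {len(buffer) / len(wire):.2f}x)")

On a real link, such a scheme only pays off if the codec keeps pace with the link bandwidth, which is presumably why the article relies on the lightweight, hardware-friendly 842 algorithm rather than a heavier general-purpose codec.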
ISSN: 1532-0626, 1532-0634
DOI: 10.1002/cpe.6101