CODA: Enabling Co-location of Computation and Data for Multiple GPU Systems

To exploit the parallelism and scalability of multiple GPUs in a system, it is critical to place computation and data together. However, two key techniques that have been used to hide memory latency and improve thread-level parallelism (TLP), namely memory interleaving and thread block scheduling, in traditional...

Bibliographic Details

Published in: ACM Transactions on Architecture and Code Optimization, October 2018, Vol. 15 (3), pp. 1-23
Main Authors: Kim, Hyojong; Hadidi, Ramyad; Nai, Lifeng; Kim, Hyesoon; Jayasena, Nuwan; Eckert, Yasuko; Kayiran, Onur; Loh, Gabriel
Format: Article
Language: English
Online Access: Full text