DiskGNN: Bridging I/O Efficiency and Model Accuracy for Out-of-Core GNN Training
Abstract: Graph neural networks (GNNs) are machine learning models specialized for graph data and widely used in many applications. To train GNNs on large graphs that exceed CPU memory, several systems store data on disk and conduct out-of-core processing. However, these systems suffer from either read amplification, when reading node features that are usually smaller than a disk page, or degraded model accuracy, from treating the graph as disconnected partitions. To close this gap, we build a system called DiskGNN, which achieves high I/O efficiency and thus fast training without hurting model accuracy. The key technique used by DiskGNN is offline sampling, which decouples graph sampling from model computation. In particular, by conducting graph sampling beforehand, DiskGNN learns which node features will be accessed by model computation, and this information is used to pack the target node features contiguously on disk to avoid read amplification. In addition, DiskGNN adopts a four-level feature store that exploits the memory hierarchy to cache node features and reduce disk access, batched packing to accelerate the feature-packing process, and pipelined training to overlap disk access with other operations. We compare DiskGNN with Ginex and MariusGNN, the state-of-the-art systems for out-of-core GNN training. The results show that DiskGNN speeds up the baselines by over 8x while matching their best model accuracy.
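To make the core mechanism concrete, below is a minimal sketch of offline sampling followed by contiguous feature packing, under toy assumptions: a small row-major feature file, a hypothetical sample_neighbors helper, and arbitrary fanout and batch sizes. None of these names or parameters come from DiskGNN itself; the sketch only illustrates the idea the abstract describes.

```python
import numpy as np

FEAT_DIM = 128        # assumed feature width; a 512 B row is far smaller than a 4 KB disk page
NUM_NODES = 100_000

# Node features stored row-major on disk. Fetching one small row from a random
# offset still costs a whole disk page: this is the read amplification problem.
features = np.random.rand(NUM_NODES, FEAT_DIM).astype(np.float32)
features.tofile("features.bin")

def sample_neighbors(seed, fanout, rng):
    """Hypothetical stand-in sampler: draw `fanout` random neighbor ids."""
    return rng.integers(0, NUM_NODES, size=fanout)

# --- Offline sampling: run all mini-batch sampling before training, so the
# --- exact set of feature rows each batch will touch is known in advance.
rng = np.random.default_rng(0)
minibatches = []
for seeds in np.array_split(rng.permutation(NUM_NODES)[:1024], 8):
    nodes = np.unique(
        np.concatenate([sample_neighbors(s, 10, rng) for s in seeds] + [seeds])
    )
    minibatches.append(nodes)

# --- Packing: copy each mini-batch's feature rows into one contiguous region,
# --- recording (offset, count) so training can read the region back directly.
offsets = []
with open("packed.bin", "wb") as f:
    for nodes in minibatches:
        offsets.append((f.tell(), len(nodes)))
        features[nodes].tofile(f)

# --- Training-time read: one large sequential read per mini-batch instead of
# --- many scattered page-sized reads.
with open("packed.bin", "rb") as f:
    off, n = offsets[0]
    f.seek(off)
    batch = np.frombuffer(f.read(n * FEAT_DIM * 4), dtype=np.float32)
    batch = batch.reshape(n, FEAT_DIM)
```

The design choice the sketch highlights is the trade: packing duplicates some feature rows across batches on disk, but converts random page-granular reads into sequential bulk reads, which is what the abstract means by trading extra offline work for I/O efficiency at training time.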
DOI: 10.48550/arxiv.2405.05231