HC-SpMM: Accelerating Sparse Matrix-Matrix Multiplication for Graphs with Hybrid GPU Cores
Main authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Sparse Matrix-Matrix Multiplication (SpMM) is a fundamental operation in
graph computing and analytics. However, the irregularity of real-world graphs
poses significant challenges to achieving efficient SpMM for graph data on
GPUs. Recently, significant advancements in GPU computing power and the
introduction of new, efficient computing cores within GPUs offer new
opportunities for acceleration. In this paper, we present HC-SpMM, a pioneering
algorithm that leverages hybrid GPU cores (Tensor cores and CUDA cores) to
accelerate SpMM for graphs. To adapt to the computing characteristics of the
different GPU cores, we investigate the impact of sparse graph features on the
performance of each core type, develop a data partitioning technique for the
graph adjacency matrix, and devise a novel strategy for intelligently selecting
the most efficient core type for processing each submatrix (an illustrative
sketch of this core-selection idea follows the record below). Additionally, we
optimize HC-SpMM for memory access and thread utilization so that the GPU's
computational resources are used to their fullest potential. To support complex
graph computing workloads, we integrate HC-SpMM into the GNN training pipeline.
Furthermore, we propose a kernel fusion strategy to enhance data reuse, as well
as a cost-effective graph layout reorganization method that mitigates the
irregularity and sparsity of real-world graphs, better fitting the
computational models of the hybrid GPU cores. Extensive experiments on 14
real-world graph datasets demonstrate that HC-SpMM achieves average speedups of
1.33x and 1.23x over state-of-the-art SpMM kernels and GNN frameworks,
respectively. |
---|---|
DOI: | 10.48550/arxiv.2412.08902 |
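
The abstract above describes partitioning the graph adjacency matrix and routing each submatrix to either Tensor cores or CUDA cores. The Python sketch below is an illustration only, not the paper's algorithm: it partitions a sparse adjacency matrix into fixed-height row panels and sends each panel down a dense (Tensor-core-style) or sparse (CUDA-core-style) path based on panel density. The panel height, the density threshold, and the use of SciPy/NumPy as stand-ins for the actual GPU kernels are all assumptions made for this sketch.

```python
# Hypothetical sketch of the hybrid-core routing idea described in the abstract.
# Assumptions (not from the paper): 16-row panels, a fixed density threshold,
# and SciPy/NumPy as stand-ins for the actual Tensor-core / CUDA-core kernels.
import numpy as np
import scipy.sparse as sp

PANEL_ROWS = 16           # assumed row-panel height (Tensor cores favor dense tiles)
DENSITY_THRESHOLD = 0.25  # assumed cutoff for taking the dense (Tensor-core-style) path


def hybrid_spmm(adj: sp.csr_matrix, features: np.ndarray) -> np.ndarray:
    """Multiply a sparse adjacency matrix by a dense feature matrix,
    choosing a dense or sparse path per row panel by panel density."""
    n_rows = adj.shape[0]
    out = np.zeros((n_rows, features.shape[1]), dtype=features.dtype)

    for start in range(0, n_rows, PANEL_ROWS):
        end = min(start + PANEL_ROWS, n_rows)
        panel = adj[start:end]                      # CSR row slice
        density = panel.nnz / (panel.shape[0] * panel.shape[1])

        if density >= DENSITY_THRESHOLD:
            # Dense path: stands in for a Tensor-core MMA on a densified tile.
            out[start:end] = panel.toarray() @ features
        else:
            # Sparse path: stands in for a CUDA-core row-wise SpMM kernel.
            out[start:end] = panel @ features
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    adj = sp.random(256, 256, density=0.05, format="csr", random_state=rng)
    feats = rng.standard_normal((256, 64))
    assert np.allclose(hybrid_spmm(adj, feats), adj @ feats)
```

In HC-SpMM proper, the per-submatrix choice is driven by the sparse graph features studied in the paper and both paths run as GPU kernels; here the two branches merely mark where a dense tile multiplication or a row-wise sparse kernel would execute.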