Serpens: A High Bandwidth Memory Based Accelerator for General-Purpose Sparse Matrix-Vector Multiplication
Format: Article
Language: English
Abstract: Sparse matrix-vector multiplication (SpMV) multiplies a sparse matrix with a dense vector. SpMV plays a crucial role in many applications, from graph analytics to deep learning. The random memory accesses of the sparse matrix make accelerator design challenging. However, high bandwidth memory (HBM) based FPGAs are a good fit for designing accelerators for SpMV. In this paper, we present Serpens, an HBM-based accelerator for general-purpose SpMV. Serpens features (1) a general-purpose design, (2) memory-centric processing engines, and (3) index coalescing to support the efficient processing of arbitrary SpMVs. From the evaluation of twelve large-size matrices, Serpens is 1.91x and 1.76x better in terms of geomean throughput than the latest accelerators GraphLily and Sextans, respectively. We also evaluate 2,519 SuiteSparse matrices, and Serpens achieves 2.10x higher throughput than a K80 GPU. For energy/bandwidth efficiency, Serpens is 1.71x/1.99x, 1.90x/2.69x, and 6.25x/4.06x better compared with GraphLily, Sextans, and K80, respectively. After scaling up to 24 HBM channels, Serpens achieves up to 60.55 GFLOP/s (30,204 MTEPS) and up to a 3.79x speedup over GraphLily. The code is available at https://github.com/UCLA-VAST/Serpens.
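For readers unfamiliar with the kernel, the sketch below shows plain SpMV (y = A·x) over a matrix in compressed sparse row (CSR) format in C++. It is only a minimal illustration of the operation an accelerator like Serpens targets, and highlights why the random accesses into the dense vector x make the workload memory-bound; the CSR layout, function name, and example matrix here are illustrative assumptions, not the paper's actual on-device format or its index-coalescing scheme.

```cpp
#include <cstdio>
#include <vector>

// Minimal CSR-format SpMV: y = A * x, where A is sparse.
// Illustrative only; the accelerator's HBM channel mapping and
// index coalescing are not modeled here.
void spmv_csr(const std::vector<int>& row_ptr,   // size nrows + 1
              const std::vector<int>& col_idx,   // column index per nonzero
              const std::vector<float>& val,     // value per nonzero
              const std::vector<float>& x,       // dense input vector
              std::vector<float>& y) {           // dense output vector
    const int nrows = static_cast<int>(row_ptr.size()) - 1;
    for (int i = 0; i < nrows; ++i) {
        float acc = 0.0f;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k) {
            acc += val[k] * x[col_idx[k]];  // random access into x
        }
        y[i] = acc;
    }
}

int main() {
    // 3x3 example matrix: [[2 0 1], [0 3 0], [4 0 5]]
    std::vector<int>   row_ptr = {0, 2, 3, 5};
    std::vector<int>   col_idx = {0, 2, 1, 0, 2};
    std::vector<float> val     = {2, 1, 3, 4, 5};
    std::vector<float> x       = {1, 1, 1};
    std::vector<float> y(3, 0.0f);
    spmv_csr(row_ptr, col_idx, val, x, y);
    for (float v : y) std::printf("%g\n", v);  // prints 3, 3, 9
    return 0;
}
```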
DOI: 10.48550/arxiv.2111.12555