Performance and Accuracy Tradeoffs for Training Graph Neural Networks on ReRAM-Based Architectures


Bibliographic Details
Published in: IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Oct. 2021, Vol. 29, No. 10, pp. 1743-1756
Authors: Arka, Aqeeb Iqbal; Joardar, Biresh Kumar; Doppa, Janardhan Rao; Pande, Partha Pratim; Chakrabarty, Krishnendu
Format: Article
Language: English
Description
Abstract: Graph neural networks (GNNs) are a variant of deep neural networks (DNNs) that operate on graphs. GNNs are more complex than DNNs, however, because they simultaneously exhibit attributes of both DNN and graph computations. In this work, we propose a ReRAM-based 3-D manycore processing-in-memory architecture called ReMaGN, tailored for on-chip training of GNNs. ReMaGN implements GNN training using reduced-precision representation to make the computation faster and to reduce the load on the communication backbone. However, reduced precision can potentially compromise the accuracy of training. Hence, we undertake a study of the performance and accuracy tradeoffs in such architectures. We demonstrate that ReMaGN outperforms conventional GPUs by up to 9.5× (on average 7.1×) in execution time, while being up to 42× (on average 33.5×) more energy efficient, without sacrificing accuracy.
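The abstract notes that a GNN layer mixes a graph-style aggregation step with a DNN-style transformation step, and that computing both at reduced precision trades some numerical accuracy for speed. The following minimal sketch (not the paper's implementation; the graph, features, and weights are illustrative placeholders) shows one such layer evaluated at full and at reduced precision, so the resulting error can be inspected:

```python
import numpy as np

def gnn_layer(adj, feats, weights, dtype):
    """One GCN-style layer: aggregate neighbor features (A @ X),
    then transform (@ W) with ReLU, cast to the given precision."""
    a = adj.astype(dtype)
    x = feats.astype(dtype)
    w = weights.astype(dtype)
    agg = a @ x                      # graph-style aggregation
    return np.maximum(agg @ w, 0)    # DNN-style transform + ReLU

rng = np.random.default_rng(0)
n, d_in, d_out = 6, 4, 3
adj = (rng.random((n, n)) < 0.4).astype(np.float64)
adj = np.maximum(adj, adj.T)                      # make undirected
adj /= np.maximum(adj.sum(1, keepdims=True), 1)   # row-normalize
feats = rng.standard_normal((n, d_in))
weights = rng.standard_normal((d_in, d_out))

full = gnn_layer(adj, feats, weights, np.float64)
half = gnn_layer(adj, feats, weights, np.float16)

# The gap between the two is the accuracy cost of reduced precision;
# monitoring it during training is the tradeoff the paper studies.
err = np.max(np.abs(full - half.astype(np.float64)))
print(f"max abs error at float16: {err:.4f}")
```

On this small example the float16 result stays close to the float64 one; at training scale, whether such errors accumulate is exactly the accuracy question the paper evaluates against its performance gains.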
ISSN: 1063-8210
EISSN: 1557-9999
DOI: 10.1109/TVLSI.2021.3110721