Echo state graph neural networks with analogue random resistive memory arrays

Detailed description

Bibliographic details
Published in: Nature Machine Intelligence 2023-02, Vol. 5 (2), p. 104-113
Authors: Wang, Shaocong, Li, Yi, Wang, Dingchen, Zhang, Woyu, Chen, Xi, Dong, Danian, Wang, Songqi, Zhang, Xumeng, Lin, Peng, Gallicchio, Claudio, Xu, Xiaoxin, Liu, Qi, Cheng, Kwang-Ting, Wang, Zhongrui, Shang, Dashan, Liu, Ming
Format: Article
Language: English
Online access: Full text
Description
Abstract: Recent years have witnessed a surge of interest in learning representations of graph-structured data, with applications from social networks to drug discovery. However, graph neural networks, the machine learning models for handling graph-structured data, face significant challenges when running on conventional digital hardware, including the slowdown of Moore’s law due to transistor scaling limits and the von Neumann bottleneck incurred by physically separated memory and processing units, as well as a high training cost. Here we present a hardware–software co-design to address these challenges, by designing an echo state graph neural network based on random resistive memory arrays, which are built from low-cost, nanoscale and stackable resistors for efficient in-memory computing. This approach leverages the intrinsic stochasticity of dielectric breakdown in resistive switching to implement random projections in hardware for an echo state network that effectively minimizes the training complexity thanks to its fixed and random weights. The system demonstrates state-of-the-art performance on both graph classification using the MUTAG and COLLAB datasets and node classification using the CORA dataset, achieving 2.16×, 35.42× and 40.37× improvements in energy efficiency for a projected random resistive memory-based hybrid analogue–digital system over a state-of-the-art graphics processing unit, and 99.35%, 99.99% and 91.40% reductions in backward-pass complexity compared with conventional graph learning. The results point to a promising direction for next-generation artificial intelligence systems for graph learning. Co-designing hardware platforms and neural network software can help improve the computational efficiency and training affordability of deep learning implementations.
A new approach designed for graph learning with echo state neural networks makes use of in-memory computing with resistive memory and shows up to a 35 times improvement in the energy efficiency and 99% reduction in training cost for graph classification on large datasets.
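The key idea the abstract describes — fixed random projection weights so that only a linear readout needs training — is the echo-state principle. The sketch below is a minimal software illustration of that principle in NumPy, not the paper's hardware implementation: the fixed random matrices `W_in` and `W_res` stand in for the random conductances of the resistive memory array, and all sizes, scalings and targets are illustrative assumptions.

```python
import numpy as np

# Echo state network sketch (illustrative; not the paper's analogue system).
# The reservoir weights are random and never trained, mirroring how the
# paper's random resistive arrays implement fixed random projections.
rng = np.random.default_rng(0)

n_in, n_res, n_out = 3, 100, 2
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))    # fixed random input projection
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))  # fixed random recurrent weights
# Rescale to spectral radius < 1 so the echo state property holds.
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))

def reservoir_states(inputs):
    """Run the fixed reservoir over an input sequence; no learning happens here."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained, here by ridge regression — this is why
# the backward-pass cost collapses compared with end-to-end gradient training.
inputs = rng.standard_normal((200, n_in))
targets = rng.standard_normal((200, n_out))  # placeholder targets
X = reservoir_states(inputs)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ targets).T
pred = X @ W_out.T  # readout predictions, shape (200, n_out)
```

In the paper's graph setting the same principle applies per node or per graph: random projections embed the graph signal, and only a lightweight readout is fitted, which is what yields the reported reductions in training complexity.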
ISSN: 2522-5839
DOI: 10.1038/s42256-023-00609-5