NAND-SPIN-Based Processing-in-MRAM Architecture for Convolutional Neural Network Acceleration

Bibliographic Details
Published in: arXiv.org 2022-04
Main Authors: Zhao, Yinglin; Yang, Jianlei; Li, Bing; Cheng, Xingzhou; Ye, Xucheng; Wang, Xueyan; Jia, Xiaotao; Wang, Zhaohao; Zhang, Youguang; Zhao, Weisheng
Format: Article
Language: English
Online Access: Full text
Abstract: The performance and efficiency of processing large-scale datasets on traditional computing systems suffer from critical bottlenecks due to the existing "power wall" and "memory wall" problems. To resolve these problems, processing-in-memory (PIM) architectures have been developed to bring computation logic into or near memory and thereby alleviate the bandwidth limitations of data transmission. NAND-like spintronics memory (NAND-SPIN) is a promising magnetoresistive random-access memory (MRAM) with low write energy and high integration density, and it can be employed to perform efficient in-memory computation. In this work, we propose a NAND-SPIN-based PIM architecture for efficient convolutional neural network (CNN) acceleration. A straightforward data mapping scheme is exploited to improve parallelism while reducing data movement. Benefiting from the excellent characteristics of NAND-SPIN and the in-memory processing architecture, experimental results show that the proposed approach achieves \(\sim\)2.6\(\times\) speedup and \(\sim\)1.4\(\times\) improvement in energy efficiency over state-of-the-art PIM solutions.
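
The abstract only summarizes the data mapping scheme at a high level. As a rough, hypothetical illustration (not taken from the paper) of the general idea of lowering a convolution so that each filter's weights could sit in one memory subarray while multiply-accumulate operations run in parallel across subarrays, a minimal NumPy sketch might look like the following; the function names and the im2col-style lowering are assumptions for illustration only.

import numpy as np

# Hypothetical illustration only: a plain matrix product stands in for the
# row-parallel multiply-accumulate that a PIM memory array would perform.

def im2col(x, k):
    # Unroll every k x k patch of a single-channel feature map into one row.
    h, w = x.shape
    out_h, out_w = h - k + 1, w - k + 1
    cols = np.empty((out_h * out_w, k * k), dtype=x.dtype)
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[idx] = x[i:i + k, j:j + k].ravel()
            idx += 1
    return cols, (out_h, out_w)

def pim_style_conv(x, filters):
    # Each filter (one row of `weights`) plays the role of one subarray;
    # the single matrix product emulates the in-memory parallel MACs.
    k = filters.shape[-1]
    cols, (out_h, out_w) = im2col(x, k)              # activations streamed in
    weights = filters.reshape(filters.shape[0], -1)  # one filter per "subarray"
    out = cols @ weights.T                           # parallel multiply-accumulate
    return out.T.reshape(filters.shape[0], out_h, out_w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((8, 8)).astype(np.float32)
    filters = rng.standard_normal((4, 3, 3)).astype(np.float32)
    print(pim_style_conv(x, filters).shape)  # (4, 6, 6)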
ISSN: 2331-8422
DOI: 10.48550/arxiv.2204.09989