An in-memory computing architecture based on two-dimensional semiconductors for multiply-accumulate operations

Detailed Description

Bibliographic Details
Published in: Nature Communications, 2021-06, Vol. 12 (1), Article 3347
Authors: Wang, Yin; Tang, Hongwei; Xie, Yufeng; Chen, Xinyu; Ma, Shunli; Sun, Zhengzong; Sun, Qingqing; Chen, Lin; Zhu, Hao; Wan, Jing; Xu, Zihan; Zhang, David Wei; Zhou, Peng; Bao, Wenzhong
Format: Article
Language: English
Description
Abstract: In-memory computing may enable multiply-accumulate (MAC) operations, which are the primary calculations used in artificial intelligence (AI). Performing MAC operations with high capacity in a small area with high energy efficiency remains a challenge. In this work, we propose a circuit architecture that integrates monolayer MoS2 transistors in a two-transistor–one-capacitor (2T-1C) configuration. In this structure, the memory portion is similar to a 1T-1C dynamic random access memory (DRAM), so that theoretically the cycling endurance and erase/write speed inherit the merits of DRAM. In addition, the ultralow leakage current of the MoS2 transistor enables the storage of multi-level voltages on the capacitor with a long retention time. The electrical characteristics of a single MoS2 transistor also allow analog computation by multiplying the drain voltage by the voltage stored on the capacitor. The sum of products is then obtained by converging the currents from multiple 2T-1C units. Based on our experimental results, a neural network is trained ex situ for image recognition with 90.3% accuracy. In the future, such 2T-1C units can potentially be integrated into three-dimensional (3D) circuits with dense logic and memory layers for low-power in-situ training of neural networks in hardware.
In standard computing architectures, memory and logic circuits are separated, a feature that slows the matrix operations vital to deep learning algorithms. Here, the authors present an alternative in-memory architecture and demonstrate a feasible approach to analog matrix multiplication.
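The analog MAC scheme described in the abstract maps onto a simple numerical model: each 2T-1C cell multiplies an input drain voltage by the multi-level voltage stored on its capacitor (the stored voltage gates a transistor biased in the linear region, so its drain current is roughly proportional to the product), and the currents of all cells sharing an output line are summed. The Python sketch below is only an illustration of that idea under assumed device parameters; the constants K and V_TH and the helpers cell_current and column_mac are hypothetical and are not taken from the paper.

```python
import numpy as np

# Idealized model of one 2T-1C column performing an analog MAC.
# K (transconductance factor) and V_TH (threshold voltage) are
# assumed, illustrative values, not measured MoS2 device parameters.
K = 1e-6      # A/V^2
V_TH = 0.5    # V

def cell_current(v_stored, v_drain, k=K, v_th=V_TH):
    """Drain current of one computation transistor in the linear
    (triode) region: I ~ k * (V_G - V_TH) * V_DS, i.e. proportional
    to the product of the stored (gate) voltage and the drain voltage."""
    v_ov = np.maximum(v_stored - v_th, 0.0)   # overdrive; cell is off below V_TH
    return k * v_ov * v_drain

def column_mac(v_stored, v_drain):
    """Sum of products: currents of all cells converging on one
    shared output line add up (Kirchhoff's current law)."""
    return np.sum(cell_current(np.asarray(v_stored), np.asarray(v_drain)))

# Example: weights encoded as multi-level capacitor voltages,
# inputs encoded as drain voltages.
weights_as_voltages = [0.9, 1.3, 0.7, 1.1]   # V on the storage capacitors
inputs_as_voltages  = [0.2, 0.1, 0.3, 0.05]  # V applied to the drains

i_out = column_mac(weights_as_voltages, inputs_as_voltages)
print(f"column output current: {i_out * 1e9:.2f} nA")
```

In the ex-situ training flow the abstract describes, weights obtained from offline training would be quantized to the available multi-level voltages and written onto the capacitors through the access transistors before inference; the long retention time afforded by the ultralow leakage current is what makes such multi-level storage practical.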
ISSN: 2041-1723
DOI: 10.1038/s41467-021-23719-3