Technological Design of 3D NAND-Based Compute-in-Memory Architecture for GB-Scale Deep Neural Network

Bibliographic Details
Published in: IEEE Electron Device Letters, 2021-02, Vol. 42 (2), pp. 160-163
Main authors: Shim, Wonbo; Yu, Shimeng
Format: Article
Language: English
Description
Abstract: In this work, a heterogeneous integration strategy for a 3D NAND based compute-in-memory (CIM) architecture is proposed for large-scale deep neural networks (DNNs). While most CIM architectures reported to date have focused on image classification models with MB-level parameters, we aim at huge language translation models with GB-scale parameters. Our 3D NAND CIM architecture design exploits two fabrication techniques, a wafer bonding scheme and CMOS under array (CUA), to integrate CMOS circuits, 3D NAND cells, and high voltage (HV) transistors at different tiers without thermal budget issues during the fabrication process. The bonding pads between the two wafers are designed to transfer the input and output vectors while ensuring a ~1 μm pitch that is feasible with hybrid bonding. The chip size of the 512 Gb 128-layer 3D NAND CIM architecture is estimated to be 166 mm² with 7 nm FinFET logic transistors. Using the physical and electrical parameters of standard 3D NAND cells, an energy efficiency of 1.15-19.01 tera operations per second per watt (TOPS/W) is achieved.
ISSN: 0741-3106, 1558-0563
DOI: 10.1109/LED.2020.3048101
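
As a back-of-envelope illustration of the figures quoted in the abstract above, the short Python sketch below checks how much of a 512 Gb array a GB-scale parameter set would occupy and models the in-array multiply-accumulate that a CIM architecture performs. The model size, weight precision, and array dimensions here are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Capacity sanity check using the 512 Gb figure from the abstract:
# 512 Gb corresponds to 64 GiB of storage, so a GB-scale model fits on one chip.
array_capacity_bits = 512 * 2**30        # 512 Gb array (from the abstract)
model_params = 1.0e9                     # hypothetical GB-scale model size
bits_per_weight = 8                      # hypothetical weight precision
required_bits = model_params * bits_per_weight
print(f"Array utilization: {required_bits / array_capacity_bits:.1%}")

# Minimal sketch of the in-array multiply-accumulate (MAC) idea behind CIM:
# weights are stored as cell states along the bitlines, binary inputs are
# applied on the wordlines, and each bitline accumulates the products.
rng = np.random.default_rng(0)
weights = rng.integers(0, 2, size=(128, 64))  # 128 wordlines x 64 bitlines (illustrative)
inputs = rng.integers(0, 2, size=128)         # binary input vector on the wordlines
bitline_sums = inputs @ weights               # analog current summation, modeled digitally
print(bitline_sums[:8])
```

This is only a conceptual sketch: in the actual architecture the summation happens in the analog domain on the bitlines and is digitized by peripheral circuits, whereas here it is emulated with an integer dot product.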