Runtime Support for Accelerating CNN Models on Digital DRAM Processing-in-Memory Hardware

Bibliographic Details
Published in: IEEE Computer Architecture Letters, 2022-07, Vol. 21 (2), p. 33-36
Main Authors: Shin, Yongwon; Park, Juseong; Hong, Jeongmin; Sung, Hyojin
Format: Article
Language: English
Abstract: Processing-in-memory (PIM) provides promising solutions to the main memory bottleneck by placing computational logic in or near memory devices to reduce data movement overheads. Recent work explored how commercial DRAM can feature digital PIM logic while meeting fab-level energy and area constraints, and showed a significant speedup in the inference time of data-intensive deep learning models. However, convolutional neural network (CNN) models were not considered main targets for commercial DRAM-PIM due to their compute-intensive convolution layers. Moreover, recent studies revealed that the area and power constraints on the memory die prevent DRAM-PIM from competing with GPUs and specialized accelerators at accelerating these layers. Recently, mobile CNN models have increasingly adopted a composition of depthwise and pointwise convolutions in place of such compute-intensive convolutions to reduce computation cost without an accuracy drop. In this paper, we show that 1x1 convolutions can be offloaded for PIM acceleration with integrated runtime support and without any hardware or algorithm changes. We obtain further speedups through parallel execution on the GPU and DRAM-PIM and through code-generation optimizations. Our solution achieves up to a 35.2% speedup (31.6% on average) across all 1x1 convolutions in mobile CNN models over GPU execution.
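
For intuition on why this offload needs no algorithm changes (an illustrative sketch, not material from the paper): a 1x1 pointwise convolution is exactly a matrix multiplication over the channel dimension, the GEMM pattern that digital DRAM-PIM logic is designed to accelerate. The NumPy sketch below uses hypothetical layer shapes to verify that equivalence and to tally the FLOP reduction that motivates the depthwise-plus-pointwise decomposition.

```python
# Minimal sketch (hypothetical shapes, not from the paper): a 1x1 pointwise
# convolution is exactly a GEMM over the channel dimension, i.e., the
# matrix-multiply pattern that digital DRAM-PIM hardware accelerates.
import numpy as np

N, C_in, C_out, H, W = 1, 64, 128, 56, 56  # hypothetical layer shape

x = np.random.rand(N, C_in, H, W).astype(np.float32)  # input activations
w = np.random.rand(C_out, C_in).astype(np.float32)    # 1x1 kernel weights

# Direct 1x1 convolution: each output pixel is a weighted sum over input channels.
direct = np.einsum('oc,nchw->nohw', w, x)

# The same computation as one GEMM: flatten spatial positions into columns.
x_mat = x.reshape(N, C_in, H * W)              # (N, C_in, H*W)
gemm = (w @ x_mat).reshape(N, C_out, H, W)     # (C_out, C_in) x (C_in, H*W)

assert np.allclose(direct, gemm, atol=1e-3)

# Cost comparison motivating the depthwise + pointwise decomposition
# (MobileNet-style); K is a hypothetical kernel size.
K = 3
standard_flops = 2 * K * K * C_in * C_out * H * W
separable_flops = 2 * K * K * C_in * H * W + 2 * C_in * C_out * H * W
print(f'standard {K}x{K} conv FLOPs:  {standard_flops:,}')
print(f'depthwise+pointwise FLOPs: {separable_flops:,}')
print(f'reduction: {standard_flops / separable_flops:.1f}x')
```

With these example shapes, the depthwise-plus-pointwise pair performs roughly 8x fewer FLOPs than a standard 3x3 convolution, and the pointwise half is a pure GEMM, which is the part the paper offloads to DRAM-PIM.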
ISSN: 1556-6056, 1556-6064
DOI: 10.1109/LCA.2022.3182363