PNPU: An Energy-Efficient Deep-Neural-Network Learning Processor With Stochastic Coarse-Fine Level Weight Pruning and Adaptive Input/Output/Weight Zero Skipping


Bibliographic details
Published in: IEEE Solid-State Circuits Letters, 2021, Vol. 4, pp. 22-25
Authors: Kim, Sangyeob; Lee, Juhyoung; Kang, Sanghoon; Lee, Jinmook; Jo, Wooyoung; Yoo, Hoi-Jun
Format: Article
Language: English
Online access: Order full text
Description
Abstract: Recently, deep-neural-network (DNN) learning processors for edge devices have been proposed, but they cannot reduce the complexity of over-parameterized networks during training. They also cannot support energy-efficient zero skipping, because previous methods cannot be applied fully during backpropagation and the weight-gradient update. In this letter, an energy-efficient DNN learning processor, PNPU, is proposed with three key features: 1) stochastic coarse-fine level weight pruning; 2) adaptive input/output/weight zero skipping; and 3) a weight pruning unit with a weight sparsity balancer. As a result, PNPU achieves 3.14-278.39 TFLOPS/W energy efficiency at 0.78 V and 50 MHz with FP8 precision under 0%-90% sparsity conditions.
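To make the two central ideas concrete, the following is a minimal software sketch, not the PNPU hardware implementation: magnitude-based weight pruning (standing in for the paper's stochastic coarse-fine pruning, whose exact procedure is not given in the abstract) followed by a multiply-accumulate loop that skips operations when either operand is zero. The function names, the magnitude criterion, and the pruning granularity are all assumptions made for illustration.

```python
# Illustrative sketch only -- not the paper's method. Magnitude pruning
# zeroes the smallest-magnitude weights; zero-skip MAC then avoids the
# multiplications whose result would be zero anyway (the source of the
# sparsity-dependent energy savings described in the abstract).

def prune_weights(weights, sparsity):
    """Zero out the smallest-magnitude weights until `sparsity`
    fraction of the entries is zero (assumed criterion)."""
    n_prune = int(len(weights) * sparsity)
    # Indices sorted by ascending |w|; the first n_prune get pruned.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = list(weights)
    for i in order[:n_prune]:
        pruned[i] = 0.0
    return pruned

def zero_skip_mac(inputs, weights):
    """Accumulate sum(i * w), skipping any pair with a zero operand.
    Returns (accumulated value, number of skipped operations)."""
    acc, skipped = 0.0, 0
    for i, w in zip(inputs, weights):
        if i == 0.0 or w == 0.0:  # product would be zero: skip the work
            skipped += 1
            continue
        acc += i * w
    return acc, skipped
```

For example, pruning `[0.1, -2.0, 0.05, 3.0]` at 50% sparsity zeroes the two smallest-magnitude entries, and the zero-skip MAC then performs only the non-trivial multiplications; in hardware, the skipped operations are where the energy is saved.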
ISSN:2573-9603
DOI:10.1109/LSSC.2020.3041497