Energy-Efficient Neural Network Acceleration in the Presence of Bit-Level Memory Errors
Published in: IEEE Transactions on Circuits and Systems I: Regular Papers, 2018-12, Vol. 65 (12), p. 4285-4298
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: As a result of the increasing demand for deep neural network (DNN)-based services, efforts to develop hardware accelerators for DNNs are growing rapidly. However, while highly efficient accelerators for convolutional DNNs (Conv-DNNs) have been developed, less progress has been made with fully-connected DNNs. Based on an analysis of bit-level SRAM errors, we propose memory adaptive training with in-situ canaries (MATIC), a methodology that enables aggressive voltage scaling of accelerator weight memories to improve the energy efficiency of DNN accelerators. To enable accurate operation under voltage overscaling, MATIC combines the characteristics of SRAM bit failures with the error resilience of neural networks in a memory-adaptive training (MAT) process. Furthermore, PVT-related voltage margins are eliminated by using bit-cells from synaptic weights as in-situ canaries that track runtime environmental variation. Demonstrated on a low-power DNN accelerator fabricated in 65 nm CMOS, MATIC enables up to 3.3× energy reduction versus the nominal voltage, or 18.6× application error reduction. We also perform a simulation study that extends MAT to Conv-DNNs, and characterize the accuracy impact of bit failure statistics. Finally, we develop a weight refinement algorithm to improve the performance of MAT, and show that it improves absolute accuracy by 0.8-1.3% or reduces training time by 5-10×.
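The abstract's core idea is that MAT trains the network with the profiled bit failures of voltage-overscaled weight SRAMs injected into the weights, so the network learns around them. The following is a minimal sketch of that error-injection step only, assuming an unsigned 8-bit fixed-point weight format and an independent per-bit stuck-at fault model; all names and parameters (make_fault_masks, inject_faults, p_fail) are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the bit-error injection step at the core of memory-adaptive
# training (MAT). Fault model (assumed): each SRAM bit cell behind a weight
# fails independently with probability p_fail and is then stuck at 0 or 1
# with equal probability.
import numpy as np

WEIGHT_BITS = 8  # assumed fixed-point weight width


def make_fault_masks(shape, p_fail, rng):
    """Generate static stuck-at fault masks for one chip/voltage corner.

    Returns integer AND/OR masks per weight: stuck-at-0 cells clear their
    bit via the AND mask, stuck-at-1 cells set it via the OR mask.
    """
    fails = rng.random((*shape, WEIGHT_BITS)) < p_fail
    stuck_vals = rng.integers(0, 2, (*shape, WEIGHT_BITS))
    bit_weights = 2 ** np.arange(WEIGHT_BITS)  # LSB-first bit values
    and_mask = np.where(fails & (stuck_vals == 0), 0, 1) * bit_weights
    or_mask = np.where(fails & (stuck_vals == 1), 1, 0) * bit_weights
    return and_mask.sum(-1).astype(np.int64), or_mask.sum(-1).astype(np.int64)


def inject_faults(weights, scale, and_mask, or_mask):
    """Quantize weights to WEIGHT_BITS-bit integers, apply the stuck-at
    masks bitwise, and dequantize. Used in the forward pass during MAT so
    the network learns around the fixed failure map."""
    q = np.clip(np.round(weights / scale), 0, 2**WEIGHT_BITS - 1)
    q = (q.astype(np.int64) & and_mask) | or_mask
    return q.astype(weights.dtype) * scale


# Example: corrupt a small weight matrix with a 1% bit failure rate.
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, (4, 4)).astype(np.float32)
and_m, or_m = make_fault_masks(w.shape, p_fail=0.01, rng=rng)
w_faulty = inject_faults(w, scale=1 / 255, and_mask=and_m, or_mask=or_m)
```

In a full MAT loop the masks would stay fixed across training iterations, mirroring the static failure map of a given chip at a given voltage; the in-situ canary mechanism that tracks runtime PVT variation is not sketched here.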
ISSN: 1549-8328, 1558-0806
DOI: 10.1109/TCSI.2018.2839613