ΔNN: Power-Efficient Neural Network Acceleration Using Differential Weights
Published in: | IEEE MICRO 2020-01, Vol.40 (1), p.67 |
---|---|
Main authors: | , , , , , |
Format: | Article |
Language: | English |
Online access: | Full text |
Abstract: | The enormous and ever-increasing complexity of state-of-the-art neural networks has impeded the deployment of deep learning on resource-limited embedded and mobile devices. To reduce the complexity of neural networks, this article presents ΔNN, a power-efficient architecture that leverages a combination of the approximate value locality of neuron weights and the algorithmic structure of neural networks. ΔNN stores each weight as its difference (Δ) from the nearest smaller weight: each weight reuses the calculations of the smaller weight, followed by a calculation on the Δ value to make up the difference. We also round the Δ up or down to the closest power of two to further reduce complexity. The experimental results show that ΔNN boosts average performance by 14%–37% and reduces average power consumption by 17%–49% compared with some state-of-the-art neural network designs. |
ISSN: | 0272-1732 1937-4143 |
DOI: | 10.1109/MM.2019.2948345 |
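As a rough illustration of the differential-weight idea summarized in the abstract, the sketch below encodes a sorted list of weights as deltas to the nearest smaller weight, rounds each nonzero delta to the closest power of two, and reconstructs approximate weights from the running sum. The function names and encoding details are illustrative assumptions, not the paper's actual hardware implementation.

```python
import math

def delta_encode_pow2(weights):
    # Sort so each weight's "nearest smaller weight" is its predecessor.
    ws = sorted(weights)
    deltas = [ws[0]]  # the smallest weight is kept as-is
    for prev, cur in zip(ws, ws[1:]):
        d = cur - prev
        # Round each nonzero delta to the closest power of two, so the
        # make-up calculation can be done with a shift instead of a multiply.
        deltas.append(0 if d == 0 else 2 ** round(math.log2(d)))
    return deltas

def reconstruct(deltas):
    # Each weight reuses the previous weight's result plus its own delta.
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out
```

Because every stored Δ is zero or a power of two, the correction term for each weight costs at most one shift-and-add on top of the reused result for the smaller weight, which is the intuition behind the performance and power savings the abstract reports.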