Efficient Mitchell's Approximate Log Multipliers for Convolutional Neural Networks

Bibliographic Details
Published in: IEEE Transactions on Computers, 2019-05, Vol. 68 (5), pp. 660-675
Main authors: Kim, Min Soo, Barrio, Alberto A. Del, Oliveira, Leonardo Tavares, Hermida, Roman, Bagherzadeh, Nader
Format: Article
Language: English
Description
Abstract: This paper proposes energy-efficient approximate multipliers based on Mitchell's log multiplication, optimized for performing inference on convolutional neural networks (CNNs). Several design techniques are applied to the log multiplier, including a fully parallel leading-one detector (LOD), efficient shift-amount calculation, and exact zero computation. Additionally, truncation of the operands is studied to create a customizable log multiplier that further reduces energy consumption. The paper also proposes using one's complement to handle negative numbers, as an approximation of the two's complement used in prior works. The viability of the proposed designs is supported by detailed formal analysis as well as experimental results on CNNs. The experiments also provide insights into the effect of approximate multiplication in CNNs, identifying the importance of minimizing the range of error. The proposed customizable design at w = 8 saves up to 88 percent of energy compared to an exact 32-bit fixed-point multiplier, with only a 0.2 percent performance degradation on the ImageNet ILSVRC2012 dataset.
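For readers unfamiliar with the underlying algorithm, the following Python sketch models classic Mitchell log multiplication (Mitchell, 1962) for unsigned operands: each operand is decomposed around its leading-one position (the LOD's role in hardware), the fractional parts are added as an approximate log sum, and the antilog is reconstructed with a shift. The exact-zero check mirrors the exact zero computation mentioned in the abstract, but the `w` truncation parameter, the fixed-point width, and the function name are illustrative assumptions, not the paper's hardware design; sign handling via the paper's one's-complement approximation is omitted.

```python
from typing import Optional

def mitchell_mult(a: int, b: int, w: Optional[int] = None) -> int:
    """Approximate a * b for unsigned integers via Mitchell's log method.

    Software model for illustration only; `w` is an assumed parameter that
    loosely mimics the paper's customizable operand truncation.
    """
    if a == 0 or b == 0:
        return 0                                # exact zero: log2(0) is undefined

    FRAC_BITS = 16                              # fixed-point fraction width of this model

    def log_approx(n: int):
        k = n.bit_length() - 1                  # leading-one position (the LOD's job)
        frac = n - (1 << k)                     # bits below the leading one
        if w is not None and k > w:             # optional truncation: keep only the
            frac &= ~((1 << (k - w)) - 1)       # top w fraction bits
        x = (frac << FRAC_BITS) >> k            # fraction in [0, 1) as Q0.16 fixed point
        return k, x                             # log2(n) ~= k + x

    k1, x1 = log_approx(a)
    k2, x2 = log_approx(b)

    # log2(a*b) ~= (k1 + x1) + (k2 + x2); the antilog re-inserts the implicit 1.
    s = x1 + x2
    if s < (1 << FRAC_BITS):                    # no carry out of the fraction sum
        mantissa, exp = (1 << FRAC_BITS) + s, k1 + k2
    else:                                       # carry: 2^(k1+k2+1) * (1 + (s - 1))
        mantissa, exp = s, k1 + k2 + 1
    return (mantissa << exp) >> FRAC_BITS

if __name__ == "__main__":
    for a, b in [(3, 3), (5, 7), (100, 200), (255, 255)]:
        approx = mitchell_mult(a, b)
        print(f"{a} * {b}: exact={a * b}, mitchell={approx}, "
              f"err={100 * (a * b - approx) / (a * b):.1f}%")
```

A well-known property of this scheme is that it never overestimates: the linear approximation log2(1 + x) ~ x undershoots the true logarithm, so the result is at most the exact product, with a worst-case error of roughly 11 percent. This bounded, one-sided error is the kind of behavior the abstract's remark about minimizing the range of error concerns.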
ISSN: 0018-9340
eISSN: 1557-9956
DOI: 10.1109/TC.2018.2880742