BoolNet: Minimizing The Energy Consumption of Binary Neural Networks
Saved in:
Main Authors:
Format: Article
Language: eng
Online Access: Order full text
Abstract: Recent works on Binary Neural Networks (BNNs) have made promising progress in narrowing the accuracy gap between BNNs and their 32-bit counterparts. However, the accuracy gains are often based on specialized model designs using additional 32-bit components. Furthermore, almost all previous BNNs use 32-bit feature maps and 32-bit shortcuts enclosing the corresponding binary convolution blocks, which helps to effectively maintain accuracy but is not friendly to hardware accelerators with limited memory, energy, and computing resources. Thus, we raise the following question: how can accuracy and energy consumption be balanced in a BNN design? We extensively study this fundamental problem in this work and propose a novel BNN architecture without most of the commonly used 32-bit components: BoolNet. Experimental results on ImageNet demonstrate that BoolNet achieves a 4.6x energy reduction coupled with 1.2% higher accuracy than the commonly used BNN architecture Bi-RealNet. Code and trained models are available at: https://github.com/hpi-xnor/BoolNet.
DOI: 10.48550/arxiv.2106.06991
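The abstract contrasts BoolNet with designs like Bi-RealNet, where a 32-bit shortcut encloses each binary convolution block. Below is a minimal sketch of that baseline pattern (not code from the paper; PyTorch-style, with all class and parameter names illustrative): weights and activations are binarized for the convolution itself, while the residual addition keeps the full-precision feature map.

```python
# A minimal sketch (not from the paper) of the pattern the abstract describes:
# a binary convolution enclosed by a 32-bit shortcut, as in Bi-RealNet.
# All names here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarySign(torch.autograd.Function):
    """Binarize to {-1, +1}; straight-through estimator in backward."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clipped STE: pass gradients only where |x| <= 1.
        return grad_output * (x.abs() <= 1).float()


class BinaryConvBlock(nn.Module):
    """Binary conv with a 32-bit shortcut: y = bn(binconv(sign(x))) + x."""

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        # Both activations and weights are binarized for the convolution...
        xb = BinarySign.apply(x)
        wb = BinarySign.apply(self.conv.weight)
        out = F.conv2d(xb, wb, padding=1)
        # ...but the shortcut adds back the full-precision (32-bit) feature
        # map, which is the memory/energy cost BoolNet aims to remove.
        return self.bn(out) + x
```

Passing a tensor through, e.g. `BinaryConvBlock(64)(torch.randn(1, 64, 32, 32))`, yields a 32-bit feature map: the binary savings apply only inside the convolution, which is exactly the overhead the abstract says BoolNet avoids.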