Fixed-Sign Binary Neural Network: An Efficient Design of Neural Network for Internet-of-Things Devices
Saved in:
| Published in: | IEEE Access, 2020, Vol. 8, pp. 164858-164863 |
|---|---|
| Main Authors: | , , |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online Access: | Full text |
| Abstract: | High computational requirements and heavy memory costs are the main issues that limit the deployability of Convolutional Neural Networks in the resource-constrained environments typically found in Internet-of-Things (IoT) edge devices. To address this problem, binary and ternary networks have been proposed, which constrain the weights in order to reduce computational and memory costs. However, owing to the binary or ternary values, backward propagation during training is not as efficient as usual, which makes such models difficult to train on edge devices. In this paper, we take a different approach and propose the Fixed-Sign Binary Neural Network (FSB), which decomposes the convolution kernel into a sign and a scaling factor, as in prior research, but trains only the scaling factors instead of both. By doing so, FSB keeps the signs out of backward propagation and makes the models easy to deploy and train on IoT devices. Meanwhile, the convolution-acceleration architecture that we design for FSB reduces the computing burden while achieving the same performance. Thanks to the efficiency of FSB, even though we randomly initialize the signs and fix them to be untrainable, FSB still achieves remarkable performance. |
| ISSN: | 2169-3536 |
| DOI: | 10.1109/ACCESS.2020.3022902 |
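
The abstract only sketches the idea of fixing the kernel signs and training the scaling factors. A minimal PyTorch-style sketch of such a layer, based solely on that description, might look like the following; the module name `FixedSignConv2d` and the per-output-channel granularity of the scaling factor are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FixedSignConv2d(nn.Module):
    """Convolution whose kernel is a fixed random sign pattern multiplied by a
    trainable scaling factor, so backpropagation only updates the scales.
    (Illustrative sketch; layer name and scale granularity are assumptions.)"""

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        # Random +/-1 signs, registered as a buffer so they receive no gradient.
        signs = torch.randint(0, 2, (out_channels, in_channels, kernel_size, kernel_size)) * 2 - 1
        self.register_buffer("sign", signs.float())
        # One trainable scaling factor per output channel (assumed granularity).
        self.alpha = nn.Parameter(torch.ones(out_channels, 1, 1, 1))
        self.stride = stride
        self.padding = padding

    def forward(self, x):
        # Effective weight = fixed sign * trainable scale; gradients flow only into alpha.
        weight = self.sign * self.alpha
        return F.conv2d(x, weight, stride=self.stride, padding=self.padding)

# Usage example: only `alpha` shows up in parameters(), so an optimizer never touches the signs.
layer = FixedSignConv2d(3, 16, kernel_size=3, padding=1)
out = layer(torch.randn(1, 3, 32, 32))
print(out.shape, [p.shape for p in layer.parameters()])
```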