Bayesian Optimized 1-Bit CNNs
Saved in:
Main authors: , , , , , ,
Format: Article
Language: English
Online access: Order full text
Abstract: Deep convolutional neural networks (DCNNs) have dominated recent
developments in computer vision, producing a series of record-breaking models.
However, deploying powerful DCNNs in resource-limited environments, such as
embedded devices and smartphones, remains a great challenge. Researchers have
recognized that 1-bit CNNs are one feasible solution, but their performance
still lags well behind that of full-precision DCNNs. In this paper, we propose
a novel approach, called Bayesian optimized 1-bit CNNs (BONNs), which takes
advantage of Bayesian learning, a well-established strategy for hard problems,
to significantly improve the performance of extreme 1-bit CNNs. We incorporate
the prior distributions of full-precision kernels and features into the
Bayesian framework to construct 1-bit CNNs in an end-to-end manner, which has
not been considered in any previous related method. The Bayesian losses are
derived with theoretical support and optimize the network simultaneously in
continuous and discrete spaces, aggregating different losses jointly to
improve model capacity. Extensive experiments on the ImageNet and CIFAR
datasets show that BONNs achieve the best classification performance among
state-of-the-art 1-bit CNNs.
DOI: 10.48550/arxiv.1908.06314
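The core idea the abstract describes, constraining full-precision kernels toward binary values while training, can be illustrated with a minimal sketch. The code below shows XNOR-Net-style 1-bit quantization (a scaled sign function) together with a simple quadratic penalty pulling full-precision weights toward their binarized counterparts. The function names and the quadratic form are illustrative assumptions; the paper's actual Bayesian losses are more elaborate than this.

```python
import numpy as np

def binarize_kernel(w):
    """Quantize a full-precision kernel to two values, {-alpha, +alpha}.

    alpha is the mean absolute value of the kernel, a common per-kernel
    scaling factor in 1-bit CNNs (as in XNOR-Net).
    """
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(w), alpha

def prior_penalty(w):
    """Quadratic penalty encouraging w to stay near its 1-bit version.

    This stands in for the prior-based regularization the abstract
    mentions; it is zero exactly when w is already binary.
    """
    b, _ = binarize_kernel(w)
    return float(np.mean((w - b) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(3, 3))          # a toy 3x3 convolution kernel
    b, alpha = binarize_kernel(w)
    # The quantized kernel carries only one bit of sign information per
    # weight, plus a single shared scale alpha.
    print(np.unique(np.abs(np.round(b, 6))))   # a single magnitude: alpha
    print(prior_penalty(w) >= prior_penalty(b))  # penalty vanishes once binary
```

During training, such a penalty would be added to the task loss so that gradient updates in the continuous space keep the full-precision weights close to the discrete space where inference happens.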