Bayesian Optimized 1-Bit CNNs

Deep convolutional neural networks (DCNNs) have dominated the recent developments in computer vision through making various record-breaking models. However, it is still a great challenge to achieve powerful DCNNs in resource-limited environments, such as on embedded devices and smart phones. Researchers have realized that 1-bit CNNs can be one feasible solution to resolve the issue; however, they are baffled by the inferior performance compared to the full-precision DCNNs. In this paper, we propose a novel approach, called Bayesian optimized 1-bit CNNs (denoted as BONNs), taking the advantage of Bayesian learning, a well-established strategy for hard problems, to significantly improve the performance of extreme 1-bit CNNs. We incorporate the prior distributions of full-precision kernels and features into the Bayesian framework to construct 1-bit CNNs in an end-to-end manner, which have not been considered in any previous related methods. The Bayesian losses are achieved with a theoretical support to optimize the network simultaneously in both continuous and discrete spaces, aggregating different losses jointly to improve the model capacity. Extensive experiments on the ImageNet and CIFAR datasets show that BONNs achieve the best classification performance compared to state-of-the-art 1-bit CNNs.
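
The abstract describes combining a conventional task loss with prior-driven losses on the latent full-precision kernels, so that the network is optimized jointly in the continuous (full-precision) and discrete (1-bit) spaces. The PyTorch sketch below illustrates only that general recipe, not the authors' BONN implementation: the straight-through sign binarization, the quadratic pull of each kernel toward ±1 (standing in for a kernel prior), the tiny example network, and the 1e-4 weighting are all assumptions made for this example.

# Minimal sketch (assumed, not the paper's code): a 1-bit convolution layer trained with
# a task loss plus an illustrative kernel-prior penalty, aggregated into one objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass, straight-through estimator in the backward pass."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Pass gradients through only where |w| <= 1 (standard STE clipping).
        return grad_out * (w.abs() <= 1).float()


class BinaryConv2d(nn.Conv2d):
    """Convolves with sign(W); the latent full-precision W is what gets updated."""

    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        return F.conv2d(x, w_bin, self.bias, self.stride, self.padding)


def kernel_prior_loss(model):
    """Illustrative prior term: penalize the distance of each latent kernel from its
    binarization, i.e. assume kernel values cluster around +1 and -1."""
    loss = 0.0
    for m in model.modules():
        if isinstance(m, BinaryConv2d):
            loss = loss + ((m.weight - torch.sign(m.weight)) ** 2).mean()
    return loss


# Tiny example network and one joint optimization step (hypothetical shapes/values).
model = nn.Sequential(
    BinaryConv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))

opt.zero_grad()
task_loss = F.cross_entropy(model(x), y)
total = task_loss + 1e-4 * kernel_prior_loss(model)  # aggregate the two losses jointly
total.backward()
opt.step()

In the paper the kernel and feature priors are modelled explicitly and give rise to dedicated Bayesian losses with theoretical support; the sketch only mirrors the joint continuous/discrete loss structure.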

Bibliographic details
Main authors: Gu, Jiaxin; Zhao, Junhe; Jiang, Xiaolong; Zhang, Baochang; Liu, Jianzhuang; Guo, Guodong; Ji, Rongrong
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition
Online access: Order full text
creator Gu, Jiaxin; Zhao, Junhe; Jiang, Xiaolong; Zhang, Baochang; Liu, Jianzhuang; Guo, Guodong; Ji, Rongrong
doi_str_mv 10.48550/arxiv.1908.06314
format Article
creationdate 2019-08-17
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
identifier DOI: 10.48550/arxiv.1908.06314
language eng
recordid cdi_arxiv_primary_1908_06314
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
title Bayesian Optimized 1-Bit CNNs
url https://arxiv.org/abs/1908.06314