Mosaic-CNN: A Combined Two-Step Zero Prediction Approach to Trade off Accuracy and Computation Energy in Convolutional Neural Networks
In convolutional neural networks (CNNs), convolutional layers consume a dominant portion of computation energy due to the large number of multiply-accumulate operations (MACs). However, those MACs become meaningless (zeroes) after the rectified linear unit when the convolution results are negative. In this...
Saved in:
Published in: | IEEE journal on emerging and selected topics in circuits and systems 2018-12, Vol.8 (4), p.770-781 |
---|---|
Main authors: | Kim, Cheolhwan; Shin, Dongyeob; Kim, Bohun; Park, Jongsun |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
container_end_page | 781 |
---|---|
container_issue | 4 |
container_start_page | 770 |
container_title | IEEE journal on emerging and selected topics in circuits and systems |
container_volume | 8 |
creator | Kim, Cheolhwan; Shin, Dongyeob; Kim, Bohun; Park, Jongsun |
description | In convolutional neural networks (CNNs), convolutional layers consume a dominant portion of computation energy due to the large number of multiply-accumulate operations (MACs). However, those MACs become meaningless (zeroes) after the rectified linear unit (ReLU) when the convolution results are negative. In this paper, we present an efficient approach to predict and skip the convolutions generating zero outputs. The proposed two-step zero prediction approach, called mosaic CNN, can be effectively used for trading off classification accuracy for computation energy in CNNs. In the mosaic CNN, the outputs of each convolutional layer are computed considering their spatial surroundings in an output feature map. Here, the types of spatial surroundings (mosaic types) can be selected to save computation energy at the expense of accuracy. To further reduce computation, we also propose a most significant bits (MSBs) only computation scheme, where a constant value representing the least significant bits compensates for the MSB-only computations. A CNN accelerator supporting the two combined approaches has been implemented in a 65-nm CMOS process. The numerical results show that, compared with a state-of-the-art processor, the proposed reconfigurable accelerator can achieve energy savings ranging from 16.99% to 29.64% for VGG-16 without seriously compromising classification accuracy. |
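The two-step idea in the abstract can be sketched in a few lines: compute some outputs exactly according to a spatial pattern, and for the rest, first estimate the pre-ReLU sign using MSB-only operands plus a constant LSB-compensation term, skipping the full MAC when the estimate is negative. This is a minimal illustrative sketch, not the paper's implementation: the checkerboard pattern stands in for one possible mosaic type, and the names `predict_and_skip_conv`, `drop_bits`, and `lsb_comp` are hypothetical.

```python
import numpy as np


def msb_only(x, drop_bits):
    """Keep only the most significant bits of an integer array
    (zero out the low `drop_bits` bits). Hypothetical helper."""
    return (x >> drop_bits) << drop_bits


def predict_and_skip_conv(ifmap, kernel, drop_bits=2, lsb_comp=0):
    """Valid-mode 2-D convolution followed by ReLU, on integer data.

    Step 1 (mosaic, illustrative checkerboard): outputs at positions
    where (i + j) is even are computed exactly.
    Step 2 (MSB-only prediction): remaining outputs are first estimated
    with MSB-only operands plus a constant LSB-compensation term
    `lsb_comp`; if the estimate is negative, the full MAC is skipped
    because ReLU would zero it anyway.
    Returns the output map and the number of skipped full MACs.
    """
    kh, kw = kernel.shape
    oh = ifmap.shape[0] - kh + 1
    ow = ifmap.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.int64)
    skipped = 0
    for i in range(oh):
        for j in range(ow):
            patch = ifmap[i:i + kh, j:j + kw]
            if (i + j) % 2 == 0:
                # Mosaic anchor position: exact MAC, then ReLU.
                out[i, j] = max(int((patch * kernel).sum()), 0)
                continue
            # Cheap MSB-only estimate of the pre-ReLU sum.
            est = int((msb_only(patch, drop_bits) * kernel).sum()) + lsb_comp
            if est < 0:
                skipped += 1   # predicted zero after ReLU: skip full MAC
            else:
                out[i, j] = max(int((patch * kernel).sum()), 0)
    return out, skipped
```

The prediction is approximate: dropping LSBs biases the partial sum, which is why the scheme pairs it with a compensation constant, and why the paper reports an accuracy/energy trade-off rather than lossless skipping.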
doi_str_mv | 10.1109/JETCAS.2018.2865006 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 2156-3357 |
ispartof | IEEE journal on emerging and selected topics in circuits and systems, 2018-12, Vol.8 (4), p.770-781 |
issn | 2156-3357 (print); 2156-3365 (electronic) |
language | eng |
recordid | cdi_crossref_primary_10_1109_JETCAS_2018_2865006 |
source | IEEE Electronic Library (IEL) |
subjects | Accuracy; Artificial neural networks; Circuits and systems; Classification; CMOS; Computation; Computer architecture; Convolution; Convolutional neural networks; Energy conservation; Energy efficiency; energy-efficient accelerator; Feature maps; Microprocessors; Neural networks; Simulation |
title | Mosaic-CNN: A Combined Two-Step Zero Prediction Approach to Trade off Accuracy and Computation Energy in Convolutional Neural Networks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-19T05%3A55%3A19IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Mosaic-CNN:%20A%20Combined%20Two-Step%20Zero%20Prediction%20Approach%20to%20Trade%20off%20Accuracy%20and%20Computation%20Energy%20in%20Convolutional%20Neural%20Networks&rft.jtitle=IEEE%20journal%20on%20emerging%20and%20selected%20topics%20in%20circuits%20and%20systems&rft.au=Kim,%20Cheolhwan&rft.date=2018-12-01&rft.volume=8&rft.issue=4&rft.spage=770&rft.epage=781&rft.pages=770-781&rft.issn=2156-3357&rft.eissn=2156-3365&rft.coden=IJESLY&rft_id=info:doi/10.1109/JETCAS.2018.2865006&rft_dat=%3Cproquest_RIE%3E2159385062%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2159385062&rft_id=info:pmid/&rft_ieee_id=8434203&rfr_iscdi=true |