Binaryware: A High-Performance Digital Hardware Accelerator for Binary Neural Networks
Binary neural networks (BNNs) largely reduce the memory footprint and computational complexity, so they are gaining interest in various mobile applications. In BNNs, the first layer often accounts for the largest part of the entire computing time because that layer usually uses multi-bit multiplications. However, traditional hardware designed for BNN computing focuses primarily on the remaining layers, resulting in significant performance degradation. In this brief, we introduce the Binaryware architecture, which achieves high-performance computation on both the first and the remaining layers. Experimental results show that Binaryware improves the throughput per compute area by 1.5-13.3× on various BNN workloads.
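The abstract's point about the first layer is that its input activations (e.g., 8-bit image pixels) remain multi-bit even when the weights are binary, whereas the remaining layers can replace multiply-accumulate with XNOR and popcount. The sketch below illustrates that contrast in plain Python; the function names and the bit-packing convention are illustrative assumptions, not part of the paper's Binaryware hardware design.

```python
# Hypothetical sketch (not the Binaryware architecture itself): why a BNN's
# first layer costs more than its binary "rest" layers.
import numpy as np

def first_layer_dot(pixels_uint8, binary_weights_pm1):
    # First layer: activations are multi-bit (e.g., 8-bit pixels), weights are +/-1,
    # so each output still needs multi-bit multiply-accumulate work.
    return int(np.dot(pixels_uint8.astype(np.int32), binary_weights_pm1.astype(np.int32)))

def binary_layer_dot(x_bits, w_bits, n):
    # Rest layers: +1/-1 values packed as bits (1 -> +1, 0 -> -1); the dot product
    # reduces to XNOR followed by popcount: dot = 2 * popcount(~(x ^ w)) - n.
    xnor = ~(x_bits ^ w_bits)                     # agreement bits
    matches = bin(xnor & ((1 << n) - 1)).count("1")
    return 2 * matches - n

# Multi-bit first-layer example: 3 - 128 - 7 + 255 = 123.
pixels = np.array([3, 128, 7, 255], dtype=np.uint8)
weights = np.array([1, -1, -1, 1], dtype=np.int8)
print(first_layer_dot(pixels, weights))           # -> 123

# Binary-layer example with n = 4 elements per vector.
x = 0b1010  # encodes (+1, -1, +1, -1)
w = 0b1001  # encodes (+1, -1, -1, +1)
print(binary_layer_dot(x, w, 4))                  # -> 0, same as the +/-1 dot product
```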
Saved in:
Published in: | IEEE transactions on very large scale integration (VLSI) systems 2023-12, Vol.31 (12), p.2137-2141 |
---|---|
Main authors: | Ryu, Sungju; Oh, Youngtaek; Kim, Jae-Joon |
Format: | Article |
Language: | eng |
Subjects: | Adders; Applications programs; Artificial intelligence; binary neural networks (BNNs); Computer architecture; Computing time; Energy efficiency; Hardware; Hardware acceleration; hardware accelerator; Logic gates; Mobile computing; Neural networks; Performance degradation; quantization; Random access memory |
Online access: | Order full text |
container_end_page | 2141 |
---|---|
container_issue | 12 |
container_start_page | 2137 |
container_title | IEEE transactions on very large scale integration (VLSI) systems |
container_volume | 31 |
creator | Ryu, Sungju; Oh, Youngtaek; Kim, Jae-Joon |
description | Binary neural networks (BNNs) largely reduce the memory footprint and computational complexity, so they are gaining interest in various mobile applications. In BNNs, the first layer often accounts for the largest part of the entire computing time because that layer usually uses multi-bit multiplications. However, traditional hardware designed for BNN computing focuses primarily on the remaining layers, resulting in significant performance degradation. In this brief, we introduce the Binaryware architecture, which achieves high-performance computation on both the first and the remaining layers. Experimental results show that Binaryware improves the throughput per compute area by 1.5-13.3× on various BNN workloads. |
doi_str_mv | 10.1109/TVLSI.2023.3324834 |
format | Article |
identifier | ISSN: 1063-8210 |
ispartof | IEEE transactions on very large scale integration (VLSI) systems, 2023-12, Vol.31 (12), p.2137-2141 |
issn | 1063-8210 1557-9999 |
language | eng |
source | IEEE Electronic Library (IEL) |
subjects | Adders; Applications programs; Artificial intelligence; binary neural networks (BNNs); Computer architecture; Computing time; Energy efficiency; Hardware; Hardware acceleration; hardware accelerator; Logic gates; Mobile computing; Neural networks; Performance degradation; quantization; Random access memory |
title | Binaryware: A High-Performance Digital Hardware Accelerator for Binary Neural Networks |