H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking Neural Networks

Although spiking neural networks (SNNs) benefit from bio-plausible neural modeling, their low accuracy under common local synaptic plasticity learning rules limits their application in many practical tasks. Recently, an emerging SNN supervised learning algorithm inspired by backpropagation through time (BPTT) from the domain of artificial neural networks (ANNs) has successfully boosted the accuracy of SNNs and improved their practicability. However, current general-purpose processors suffer from low efficiency when performing BPTT for SNNs because their optimizations are tailored to ANNs, while current neuromorphic chips cannot support BPTT because they mainly adopt local synaptic plasticity rules for simplified implementation. In this work, we propose H2Learn, a novel architecture that achieves high efficiency for BPTT-based SNN learning while ensuring high SNN accuracy. We first characterize the behaviors of BPTT-based SNN learning. Exploiting the binary spike-based computation in the forward pass and weight update, we design look-up table (LUT)-based processing elements in the forward engine and the weight update engine that make accumulations implicit and fuse the computations of multiple input points. Exploiting the rich sparsity in the backward pass, we design a dual-sparsity-aware backward engine that leverages both input and output sparsity. Finally, we apply pipeline optimization between the engines to build an end-to-end solution for BPTT-based SNN learning. Compared with the modern NVIDIA V100 GPU, H2Learn achieves 7.38× area saving, 5.74-10.20× speedup, and 5.25-7.12× energy saving on several benchmark datasets.
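To make the LUT idea above concrete, here is a minimal NumPy sketch (an illustration of the concept, not the paper's hardware or code; the group size K = 4 and the layer sizes are assumptions chosen for the example). Because the forward pass consumes binary spikes, the partial sum of a K-element weight group can take only 2^K distinct values, so it can be precomputed once and then fetched by the K-bit spike pattern, which is what makes the accumulation "implicit" and fuses K input points into a single lookup.

    # Minimal sketch of LUT-based accumulation over binary spikes (illustrative only).
    import numpy as np

    K = 4                                   # spikes fused per lookup (assumed group size)
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((8, 16))  # 8 output neurons, 16 inputs = 4 groups of K
    spikes = rng.integers(0, 2, 16)         # binary input spike vector

    # Precompute, for every weight group, the partial sum for all 2^K spike patterns.
    patterns = (np.arange(2**K)[:, None] >> np.arange(K)) & 1   # (2^K, K) bit table
    groups = weights.reshape(8, -1, K)                          # (8, 4, K)
    lut = np.einsum('pk,ngk->ngp', patterns, groups)            # (8, 4, 2^K)

    # Forward pass: one table lookup per spike group instead of K multiply-accumulates.
    idx = spikes.reshape(-1, K) @ (1 << np.arange(K))           # pattern index per group
    out = lut[:, np.arange(idx.size), idx].sum(axis=1)

    assert np.allclose(out, weights @ spikes)                   # matches the dense MAC

Each lookup replaces K multiplications and additions with a single read of a precomputed partial sum, which is the software analogue of the accumulation-free processing elements the abstract describes.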

Bibliographic Details
Published in: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2022-11, Vol. 41 (11), p. 4782-4796
Main authors: Liang, Ling; Qu, Zheng; Chen, Zhaodong; Tu, Fengbin; Wu, Yujie; Deng, Lei; Li, Guoqi; Li, Peng; Xie, Yuan
Format: Article
Language: English
Subjects: Accuracy; Algorithms; Artificial neural networks; Back propagation networks; Biological neural networks; Convolutional neural networks; Efficiency; Engines; Lookup tables; Machine learning; Neural networks; Neuromorphic device; Neuromorphics; Neurons; Optimization; Sparsity; Spiking; spiking neural network (SNN); supervised training; Training
DOI: 10.1109/TCAD.2021.3138347
ISSN: 0278-0070
EISSN: 1937-4151
Online access: Order full text