An Asynchronous Multi-core Accelerator for SNN inference

Spiking Neural Networks (SNNs) are extensively utilized in brain-inspired computing and neuroscience research. To enhance the speed and energy efficiency of SNNs, several many-core accelerators have been developed. However, maintaining the accuracy of SNNs often necessitates frequent explicit synchronization among all cores, which presents a challenge to overall efficiency. In this paper, we propose an asynchronous architecture for Spiking Neural Networks (SNNs) that eliminates the need for inter-core synchronization, thus enhancing speed and energy efficiency. This approach leverages the pre-determined dependencies of neuromorphic cores established during compilation. Each core is equipped with a scheduler that monitors the status of its dependencies, allowing it to safely advance to the next timestep without waiting for other cores. This eliminates the necessity for global synchronization and minimizes core waiting time despite inherent workload imbalances. Comprehensive evaluations using five different SNN workloads show that our architecture achieves a 1.86x speedup and a 1.55x increase in energy efficiency compared to state-of-the-art synchronization architectures.
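The scheduling idea in the abstract can be sketched in a few lines: each core knows at compile time which upstream cores it depends on, and may advance to timestep t as soon as those cores have finished timestep t-1, rather than waiting at a global barrier for every core. The following is a minimal, hypothetical simulation of that idea; the function names, data shapes, and cost model are illustrative assumptions, not the paper's actual design.

```python
# Hedged sketch: compare per-core dependency-driven advancement against a
# global-barrier baseline. All names and the cost model are hypothetical.
from collections import defaultdict


def async_schedule(deps, work, timesteps):
    """Simulate cores that advance independently.

    deps: {core: [upstream cores whose timestep t-1 outputs it consumes]}
    work: {core: compute cost per timestep}
    Returns {core: finish time of its last timestep}.
    """
    finish = defaultdict(dict)  # finish[core][t] = completion time
    for t in range(timesteps):
        for core in work:
            # A core may start timestep t once its own timestep t-1 is done
            # and every dependency has finished its timestep t-1.
            ready = finish[core].get(t - 1, 0.0)
            for d in deps.get(core, []):
                ready = max(ready, finish[d].get(t - 1, 0.0))
            finish[core][t] = ready + work[core]
    return {c: finish[c][timesteps - 1] for c in work}


def sync_schedule(work, timesteps):
    # Baseline: a global barrier makes every core wait for the slowest core
    # at the end of every timestep.
    return {c: timesteps * max(work.values()) for c in work}
```

Under workload imbalance, a lightly loaded core whose dependencies finish early stops idling at barriers: with `deps = {"B": ["A"]}` and `work = {"A": 3, "B": 1, "C": 3}`, core B finishes earlier in the asynchronous simulation than under the barrier baseline, which is the waiting-time reduction the abstract describes.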

Published in: arXiv.org, 2024-07
Authors: Chen, Zhuo; Ma, De; Jin, Xiaofei; Xing, Qinghui; Jin, Ouwen; Du, Xin; He, Shuibing; Pan, Gang
Format: Article
Language: English
EISSN: 2331-8422
Source: Free E-Journals
Subjects: Computer architecture; Energy efficiency; Neural networks; State-of-the-art reviews; Synchronism; Time synchronization; Workload