KLIF: An Optimized Spiking Neuron Unit for Tuning Surrogate Gradient Function


Bibliographic details

Published in: Neural Computation, 2024-11, Vol. 36 (12), p. 2636-2650
Main authors: Jiang, Chunming; Zhang, Yilei
Format: Article
Language: English
Online access: Full text

Abstract

Spiking neural networks (SNNs) have garnered significant attention owing to their adeptness in processing temporal information, low power consumption, and enhanced biological plausibility. Despite these advantages, the development of efficient and high-performing learning algorithms for SNNs remains a formidable challenge. Techniques such as artificial neural network (ANN)-to-SNN conversion can convert ANNs to SNNs with minimal performance loss, but they necessitate prolonged simulations to approximate rate coding accurately. Conversely, the direct training of SNNs using spike-based backpropagation (BP) with surrogate gradient approximation is more flexible and widely adopted. Nevertheless, our research revealed that the shape of the surrogate gradient function profoundly influences the training and inference accuracy of SNNs, yet this shape is typically selected manually before training and remains static throughout the training process. In this article, we introduce a novel k-based leaky integrate-and-fire (KLIF) spiking neuron model. KLIF, featuring a learnable parameter, enables the dynamic adjustment of the height and width of the effective surrogate gradient near the threshold during training. Our proposed model is evaluated on the static CIFAR-10 and CIFAR-100 data sets, as well as the neuromorphic CIFAR10-DVS and DVS128-Gesture data sets. Experimental results demonstrate that KLIF outperforms the leaky integrate-and-fire (LIF) model across multiple data sets and network architectures. The superior performance of KLIF positions it as a viable replacement for the essential role of LIF in SNNs across diverse tasks.
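
The record describes KLIF only at a high level: a LIF-style neuron augmented with a learnable parameter k that reshapes the effective surrogate gradient near the firing threshold during training. The following is a minimal PyTorch sketch of that idea; the sigmoid surrogate, the hard reset, and all parameter names (tau, v_threshold, k_init) are illustrative assumptions, not the formulation from the paper.

# Minimal sketch of a KLIF-style neuron, assuming a sigmoid surrogate whose
# slope k is learnable. The exact update rule, reset scheme, and parameter
# names used in the paper are not given in this record and are assumptions.
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v, k):
        # Forward pass: hard threshold (Heaviside step) at v = 0.
        ctx.save_for_backward(v, k)
        return (v >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_out):
        # Backward pass: derivative of sigmoid(k * v); larger k makes the
        # effective gradient taller and narrower around the threshold.
        v, k = ctx.saved_tensors
        sig = torch.sigmoid(k * v)
        grad_v = grad_out * k * sig * (1.0 - sig)
        grad_k = (grad_out * v * sig * (1.0 - sig)).sum()
        return grad_v, grad_k

class KLIFNeuron(nn.Module):
    def __init__(self, tau=2.0, v_threshold=1.0, k_init=1.0):
        super().__init__()
        self.tau = tau
        self.v_threshold = v_threshold
        # Learnable parameter controlling the surrogate's height and width.
        self.k = nn.Parameter(torch.tensor(k_init))

    def forward(self, x, v):
        v = v + (x - v) / self.tau            # leaky integration
        spike = SurrogateSpike.apply(v - self.v_threshold, self.k)
        v = v * (1.0 - spike)                 # hard reset after a spike
        return spike, v

Because k is an nn.Parameter, the optimizer updates it alongside the synaptic weights, so training can widen or sharpen the gradient window around the threshold instead of fixing the surrogate's shape in advance.
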
DOI: 10.1162/neco_a_01712
ISSN: 0899-7667
EISSN: 1530-888X
PMID: 39312491
Publisher: MIT Press Journals
Subjects: Accuracy; Algorithms; Artificial neural networks; Back propagation networks; Datasets; Machine learning; Neural networks; Parameter identification; Spiking