SPSNN: nth Order Sequence-Predicting Spiking Neural Network

Bibliographic Details
Published in: IEEE Access, 2020, Vol. 8, pp. 110523-110534
Main Authors: Kim, Dohun; Kornijcuk, Vladimir; Hwang, Cheol Seong; Jeong, Doo Seok
Format: Article
Language: English
Online Access: Full text
Description
Summary: We introduce a means of harnessing spiking neural networks (SNNs) with rich dynamics as a dynamic hypothesis to learn complex sequences. The proposed SNN is referred to as the nth order sequence-predicting SNN (n-SPSNN), which is capable of single-step prediction and sequence-to-sequence prediction, i.e., associative recall. As a key to these capabilities, we propose a new learning algorithm, named the learning by backpropagating action potential (LbAP) algorithm, which features (i) postsynaptic event-driven learning, (ii) access to topologically and temporally local data only, (iii) a competition-induced weight-normalization effect, and (iv) fast learning. Most importantly, the LbAP algorithm offers a unified learning framework over the entire SPSNN based on local data only. The learning capacity of the SPSNN is mainly dictated by the number of hidden neurons N_h; its prediction accuracy reaches its maximum value (~1) when N_h is larger than twice the training sequence length l, i.e., N_h > 2l. Another advantage is its high tolerance to errors in input encoding compared with the state-of-the-art sequence-learning networks, namely long short-term memory (LSTM) and gated recurrent unit (GRU) networks. Additionally, its learning efficiency is approximately 100 times that of LSTM and GRU when measured in terms of the number of synaptic operations until successful training, where synaptic operations correspond to multiply-accumulate operations for LSTM and GRU. This high efficiency arises from the higher learning rate of the SPSNN, which is attributed to the LbAP algorithm. The code is available online ( https://github.com/galactico7/SPSNN ).
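To make the LbAP features quoted above concrete, here is a minimal, self-contained Python sketch of a postsynaptic event-driven learning rule that uses only per-synapse (topologically and temporally local) data and shows a competition-induced weight-normalization effect. It is an illustration under assumed names and constants (eta, tau, the trace variable), not the authors' LbAP implementation; the actual code is in the repository linked at the end of the summary.

```python
import numpy as np

# Illustrative sketch only, NOT the paper's LbAP algorithm. It mimics three
# properties listed in the abstract: (i) updates triggered by postsynaptic
# events, (ii) use of local data (per-synapse traces), and (iii) a
# competition-induced weight-normalization effect.

n_pre, eta, tau = 20, 0.1, 20.0       # fan-in, learning rate, trace time constant (ms); assumed values
w = np.full(n_pre, 0.5)               # synaptic weights onto one postsynaptic neuron
trace = np.zeros(n_pre)               # per-synapse presynaptic traces (temporally local data)

def on_pre_spike(i):
    """A presynaptic spike only bumps that synapse's own trace."""
    trace[i] += 1.0

def decay(dt):
    """Traces decay between events, so only recent inputs matter."""
    global trace
    trace *= np.exp(-dt / tau)

def on_post_spike():
    """Learning fires only on a postsynaptic spike (event-driven).
    Subtracting the mean trace makes synapses compete for a fixed
    total weight budget: the updates sum to zero (normalization)."""
    global w
    dw = eta * (trace - trace.mean())
    w = np.clip(w + dw, 0.0, 1.0)

# Toy run: inputs 0-4 fire 1 ms before the postsynaptic spike and win.
for i in range(5):
    on_pre_spike(i)
decay(dt=1.0)
on_post_spike()
print(w[:5].mean(), w[5:].mean())     # recently active synapses end up stronger
```

Because the mean trace is subtracted, the weight changes sum to zero and the total weight is conserved, which is one way a normalization effect can emerge from competition alone. As a quick reading of the capacity rule quoted above: a training sequence of length l = 50 would call for more than N_h = 2l = 100 hidden neurons to reach the maximum prediction accuracy.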
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3001296