A Tandem Learning Rule for Effective Training and Rapid Inference of Deep Spiking Neural Networks

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2023-01, Vol. 34 (1), p. 446-460
Main authors: Wu, Jibin; Chua, Yansong; Zhang, Malu; Li, Guoqi; Li, Haizhou; Tan, Kay Chen
Format: Article
Language: English
Description
Abstract: Spiking neural networks (SNNs) represent the most prominent biologically inspired computing model for neuromorphic computing (NC) architectures. However, due to the nondifferentiable nature of spiking neuronal functions, the standard error backpropagation algorithm is not directly applicable to SNNs. In this work, we propose a tandem learning framework that consists of an SNN and an artificial neural network (ANN) coupled through weight sharing. The ANN is an auxiliary structure that facilitates error backpropagation for training the SNN at the spike-train level. To this end, we consider the spike count as the discrete neural representation in the SNN and design an ANN neuronal activation function that can effectively approximate the spike count of the coupled SNN. The proposed tandem learning rule demonstrates competitive pattern recognition and regression capabilities on both conventional frame-based and event-based vision datasets, with at least an order-of-magnitude reduction in inference time and total synaptic operations compared with other state-of-the-art SNN implementations. Therefore, the proposed tandem learning rule offers a novel solution to training efficient, low-latency, and high-accuracy deep SNNs with low computing resources.
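
The abstract describes coupling an SNN forward pass (spike counts) with an ANN activation that approximates those counts to carry the backward pass. Below is a minimal sketch of that idea, assuming a PyTorch-style custom autograd function; the class name TandemSpikeCount, the constant-current integrate-and-fire simulation, and the clamped-linear surrogate are illustrative assumptions, not the authors' reference implementation.

    import torch

    class TandemSpikeCount(torch.autograd.Function):
        # Forward: integrate-and-fire (IF) spike count over T time steps.
        # Backward: gradient of a rate-based ANN approximation of that count.
        # Illustrative sketch only; not the paper's reference implementation.

        @staticmethod
        def forward(ctx, current, T=10, v_th=1.0):
            # Drive an IF neuron with a constant input current for T steps
            # and count the emitted spikes (the discrete representation).
            v = torch.zeros_like(current)
            count = torch.zeros_like(current)
            for _ in range(T):
                v = v + current
                spike = (v >= v_th).float()
                count = count + spike
                v = v - spike * v_th      # reset by subtraction
            ctx.save_for_backward(current)
            ctx.T, ctx.v_th = T, v_th
            return count

        @staticmethod
        def backward(ctx, grad_out):
            # ANN surrogate: count is roughly clamp(T * current / v_th, 0, T),
            # so the gradient is T / v_th inside the linear region, 0 outside.
            (current,) = ctx.saved_tensors
            rate = ctx.T * current / ctx.v_th
            in_linear = ((rate > 0) & (rate < ctx.T)).float()
            return grad_out * in_linear * ctx.T / ctx.v_th, None, None

    # Usage: the forward pass produces genuine spike counts, while
    # gradients flow through the ANN surrogate.
    x = torch.randn(4, 8, requires_grad=True)
    counts = TandemSpikeCount.apply(x, 10, 1.0)
    counts.sum().backward()

Because the forward and backward computations share the same weights and the same spike-count representation, the discrete SNN can be trained with standard gradient descent while retaining its event-driven inference behavior.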
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2021.3095724