Supervised Learning with First-to-Spike Decoding in Multilayer Spiking Neural Networks
Main authors:
Format: Article
Language: eng
Keywords:
Online access: Order full text
Abstract: Experimental studies support the notion of spike-based neuronal information
processing in the brain, with neural circuits exhibiting a wide range of
temporally-based coding strategies to rapidly and efficiently represent sensory
stimuli. Accordingly, it would be desirable to apply spike-based computation to
tackling real-world challenges, and in particular transferring such theory to
neuromorphic systems for low-power embedded applications. Motivated by this, we
propose a new supervised learning method that can train multilayer spiking
neural networks to solve classification problems based on a rapid,
first-to-spike decoding strategy. The proposed learning rule supports multiple
spikes fired by stochastic hidden neurons, and yet is stable by relying on
first-spike responses generated by a deterministic output layer. In addition to
this, we also explore several distinct, spike-based encoding strategies in
order to form compact representations of presented input data. We demonstrate
the classification performance of the learning rule as applied to several
benchmark datasets, including MNIST. The learning rule is capable of
generalising from the data, and is successful even when used with constrained
network architectures containing few input and hidden layer neurons.
Furthermore, we highlight a novel encoding strategy, termed 'scanline
encoding', that can transform image data into compact spatiotemporal patterns
for subsequent network processing. Designing constrained, but optimised,
network structures and performing input dimensionality reduction has strong
implications for neuromorphic applications.
DOI: 10.48550/arxiv.2008.06937
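
The first-to-spike decoding strategy described in the abstract can be made concrete with a short sketch. The code below is a minimal illustration, assuming a deterministic, current-based leaky integrate-and-fire output layer driven by hidden-layer spike trains; the function name, the parameters (`tau_m`, `threshold`, `dt`, `t_max`), and the abstention behaviour when no output neuron fires are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def first_to_spike_decode(hidden_spikes, weights, tau_m=10.0, threshold=1.0,
                          dt=0.1, t_max=100.0):
    """Simulate a deterministic LIF output layer and decode by first spike.

    hidden_spikes : list of arrays; hidden_spikes[j] holds the spike times
                    (ms) emitted by hidden neuron j.
    weights       : (n_outputs, n_hidden) connection weight matrix.
    Returns (predicted_class, first_spike_times); the predicted class is the
    index of the output neuron that crosses threshold earliest.
    """
    weights = np.asarray(weights, dtype=float)
    n_out, n_hid = weights.shape
    n_steps = int(t_max / dt)

    # Bin the hidden spike trains onto a regular time grid.
    spike_grid = np.zeros((n_hid, n_steps))
    for j, times in enumerate(hidden_spikes):
        idx = (np.asarray(times) / dt).astype(int)
        spike_grid[j, idx[(idx >= 0) & (idx < n_steps)]] = 1.0

    v = np.zeros(n_out)                    # membrane potentials
    first_spike = np.full(n_out, np.inf)   # earliest spike time per output
    decay = np.exp(-dt / tau_m)
    for step in range(n_steps):
        v = v * decay + weights @ spike_grid[:, step]
        newly_fired = (v >= threshold) & np.isinf(first_spike)
        first_spike[newly_fired] = step * dt
        if np.all(np.isfinite(first_spike)):
            break

    if np.all(np.isinf(first_spike)):
        return None, first_spike           # no neuron fired: abstain
    return int(np.argmin(first_spike)), first_spike


# Example: 4 stochastic hidden neurons, 3 output classes, random weights.
rng = np.random.default_rng(0)
hidden = [rng.uniform(0, 50, size=5) for _ in range(4)]
W = rng.uniform(0.0, 0.6, size=(3, 4))
pred, t_first = first_to_spike_decode(hidden, W)
print(pred, t_first)
```

Decoding by the earliest output spike means a decision can be made before the full simulation window elapses, which is the rapid-response property the abstract emphasises.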
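
The 'scanline encoding' highlighted in the abstract can likewise be sketched. The snippet below is one plausible reading of the idea, assuming each scan line (here, a fixed sequence of pixel coordinates) drives a single input neuron that fires at the time step where the line first crosses an above-threshold pixel; the line geometry, threshold, and time step are assumptions for illustration rather than the encoding actually used in the paper.

```python
import numpy as np

def scanline_encode(image, lines, threshold=0.5, dt=1.0):
    """Encode a 2-D grayscale image (values in [0, 1]) as spike times.

    lines : list of pixel-coordinate sequences; each line is traversed one
            pixel per time step of length dt, and its input neuron fires at
            the first step whose pixel exceeds `threshold` (np.inf if the
            line never meets an active pixel).
    Returns an array with one spike time per scan line / input neuron.
    """
    spike_times = np.full(len(lines), np.inf)
    for i, line in enumerate(lines):
        for step, (r, c) in enumerate(line):
            if image[r, c] > threshold:
                spike_times[i] = step * dt
                break
    return spike_times


# Example: encode a 28x28 image (MNIST-sized) with its 28 row scan lines,
# giving a 28-neuron input layer instead of one neuron per pixel.
img = np.zeros((28, 28))
img[10:18, 12:16] = 1.0          # a bright patch standing in for a digit
row_lines = [[(r, c) for c in range(28)] for r in range(28)]
print(scanline_encode(img, row_lines))
```

Because each scan line contributes at most one spike, the encoding compresses a 784-pixel image into a handful of spike times, which is the kind of input dimensionality reduction the abstract argues is valuable for constrained, neuromorphic-friendly networks.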