Deep Pulse-Coupled Neural Networks
Saved in:
Main Authors: | , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Summary: | Spiking Neural Networks (SNNs) capture the information-processing mechanism
of the brain by taking advantage of spiking neurons, such as the Leaky
Integrate-and-Fire (LIF) model neuron, which incorporates temporal dynamics and
transmits information via discrete and asynchronous spikes. However, the
simplified biological properties of LIF ignore the neuronal coupling and
dendritic structure of real neurons, which limits the spatio-temporal dynamics
of neurons and thus reduces the expressive power of the resulting SNNs. In this
work, we leverage a more biologically plausible neural model with complex
dynamics, i.e., a pulse-coupled neural network (PCNN), to improve the
expressiveness and recognition performance of SNNs for vision tasks. The PCNN
is a type of cortical model capable of emulating the complex neuronal
activities in the primary visual cortex. We construct deep pulse-coupled neural
networks (DPCNNs) by replacing commonly used LIF neurons in SNNs with PCNN
neurons. The intra-coupling in existing PCNN models limits the coupling between
neurons only within channels. To address this limitation, we propose
inter-channel coupling, which allows neurons in different feature maps to
interact with each other. Experimental results show that inter-channel coupling
can efficiently boost performance with fewer neurons and synapses and less
training time compared to widening the networks. For instance, compared to the
LIF-based SNN with a wide VGG9, the DPCNN with VGG9 uses only 50%, 53%, and 73% of
the neurons, synapses, and training time, respectively. Furthermore, we propose
receptive field and time dependent batch normalization (RFTD-BN) to accelerate
the convergence and improve the performance of DPCNNs. |
---|---|
DOI: | 10.48550/arxiv.2401.08649 |
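The Leaky Integrate-and-Fire dynamics the summary refers to can be sketched in a few lines. This is a minimal illustrative model, not code from the paper: the time constant `tau=2.0`, threshold `1.0`, and constant input `1.5` are assumed values chosen so the neuron fires periodically.

```python
def lif_step(v, x, tau=2.0, v_th=1.0, v_reset=0.0):
    """One discrete-time step of a Leaky Integrate-and-Fire neuron.

    v: membrane potential, x: input current, tau: membrane time constant,
    v_th: firing threshold, v_reset: reset potential.
    """
    # Leaky integration: the potential decays toward v_reset while
    # accumulating the input current (forward-Euler update).
    v = v + (x - (v - v_reset)) / tau
    # Emit a binary spike when the threshold is crossed, then hard-reset.
    if v >= v_th:
        return 1, v_reset
    return 0, v

# Drive one neuron with a constant supra-threshold input and record its
# spike train; with these values it fires on every other step.
v, spikes = 0.0, []
for _ in range(10):
    s, v = lif_step(v, 1.5)
    spikes.append(s)
# spikes == [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```

The paper's point is that this neuron has no coupling term: each unit integrates only its own input, which is the limitation the PCNN's intra- and inter-channel coupling addresses.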