A More Accurate Approximation of Activation Function with Few Spikes Neurons
Main authors: , , , , , , ,
Format: Article
Language: English
Online access: Order full text
Abstract: Recent deep neural networks (DNNs), such as diffusion models [1], face high computational demands, so spiking neural networks (SNNs) have attracted considerable attention as energy-efficient alternatives. However, conventional spiking neurons, such as leaky integrate-and-fire neurons, cannot accurately represent complex non-linear activation functions, such as Swish [2]. Few spikes (FS) neurons were proposed [3] to approximate activation functions with spiking neurons, but their approximation performance was limited by the lack of training methods that account for these neurons. We therefore propose tendency-based parameter initialization (TBPI), which enhances the approximation of activation functions with FS neurons by exploiting temporal dependencies when initializing the training parameters.
DOI: 10.48550/arxiv.2409.00044
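The abstract does not reproduce the FS-neuron dynamics or the TBPI scheme itself, so the following Python sketch only illustrates the generic few-spikes mechanism of [3] under assumed conventions: per-step thresholds `T`, membrane subtractions `h`, and output weights `d`, with the output decoded as the weighted sum of spikes. The geometric schedule for `T` and `h` and the least-squares fit of `d` are illustrative stand-ins, not the paper's TBPI method.

```python
import numpy as np

def swish(x):
    """Swish activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def fs_spikes(x, T, h):
    """Few-spikes (FS) neuron dynamics over K = len(T) time steps.

    The membrane potential v starts at the input value; at step t the
    neuron emits a spike when v >= T[t], and h[t] is subtracted from v
    after each spike. Returns the (len(x), K) matrix of spike trains.
    """
    v = np.asarray(x, dtype=float).copy()
    Z = np.zeros((v.size, len(T)))
    for t in range(len(T)):
        z = (v >= T[t]).astype(float)  # spike where potential crosses threshold
        Z[:, t] = z
        v -= h[t] * z                  # reset-by-subtraction after a spike
    return Z

K = 8                               # spike budget (time steps)
xs = np.linspace(-4.0, 4.0, 2001)   # evaluation grid

# Geometric schedule for thresholds/subtractions as a plain stand-in
# initialization; it only covers x > 0, so the negative lobe of Swish
# stays at zero here -- improving on such naive schedules is what a
# trained initialization like TBPI targets.
scale = 4.0
T = scale * 2.0 ** -np.arange(1, K + 1)
h = T.copy()

# With T and h fixed, fit the output weights d by least squares so the
# decoded output  sum_t d[t] * z[t]  matches the target activation.
Z = fs_spikes(xs, T, h)
d, *_ = np.linalg.lstsq(Z, swish(xs), rcond=None)

approx = Z @ d
print("max |error| on [-4, 4]:", np.abs(approx - swish(xs)).max())
```

With only K binary spikes per input, the FS neuron can realize at most 2^K distinct output values, which is why the choice of thresholds and weights (and hence their initialization) dominates how closely a smooth function like Swish can be matched.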