Bayesian Inference of Hidden Markov Models Through Probabilistic Boolean Operations in Spiking Neuronal Networks

Bibliographic Details
Published in: IEEE Transactions on Emerging Topics in Computational Intelligence, 2024-12, p. 1-15
Authors: Chakraborty, Ayan; Chakrabarti, Saswat
Format: Article
Language: English
Online Access: Order full text
Description
Abstract: Recurrent neural networks (RNNs) have been extensively used to address the problem of Bayesian inference of a hidden Markov model (HMM). However, such artificial neural architectures are prone to computationally exhaustive training procedures and high energy dissipation. Spiking neural networks (SNNs) have recently been explored for performing similar tasks. This paper addresses the problem of Bayesian inference of hidden Markov models (HMMs) in the SNN paradigm. A population-based stochastic temporal encoding (PSTE) scheme is introduced to establish that a spiking neuron behaves as a probabilistic Boolean operator. Using this property, the posterior probability of a hidden state is mapped to the probability of a spiking neuron firing a logic HIGH. Two new algorithms are presented for fixing the synaptic strengths, denoted by a random variable q. The first algorithm uses a sigmoidal relationship obtained from a prior statistical analysis to select the values of q such that the probability of a neuron producing a logic HIGH equals the posterior probability of a hidden state. The second algorithm determines q from data through in-network training. It is demonstrated that Bayesian inference of both two-state and multi-state HMMs is implementable using the PSTE concept. Two examples are presented: one on inferring the trend of a time series, and the other on deciphering the correct digit of a seven-segment LED display with noisy bits. Our framework performs very closely to traditional Bayesian inference (difference in accuracy < 2%) and to traditional RNNs.
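For context, the traditional Bayesian inference that the framework is benchmarked against can be realized as a recursive forward filter over the HMM posterior. The sketch below is a generic illustration only, not the paper's SNN implementation; the transition matrix A, emission matrix B, prior, and the function name filter_posterior are illustrative placeholders.

```python
import numpy as np

# Illustrative two-state HMM parameters (placeholders, not from the paper).
A = np.array([[0.9, 0.1],   # A[i, j] = P(next state j | current state i)
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],   # B[i, k] = P(observation k | state i)
              [0.4, 0.6]])
prior = np.array([0.5, 0.5])

def filter_posterior(observations, A, B, prior):
    """Recursive Bayesian (forward) filtering: P(state | observations so far)."""
    belief = prior.copy()
    posteriors = []
    for obs in observations:
        belief = A.T @ belief           # predict step: propagate through transitions
        belief = belief * B[:, obs]     # update step: weight by observation likelihood
        belief = belief / belief.sum()  # normalize to a proper posterior
        posteriors.append(belief.copy())
    return np.array(posteriors)

# Example: posterior trajectory for a short observation sequence.
print(filter_posterior([0, 0, 1, 1, 0], A, B, prior))
```

In the paper's framework, such a posterior would instead be represented by the probability of a spiking neuron firing a logic HIGH under the PSTE scheme.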
ISSN: 2471-285X
DOI: 10.1109/TETCI.2024.3502472