New prior distribution for Bayesian neural network and learning via Hamiltonian Monte Carlo

Bibliographic details
Published in: Evolving Systems 2020-12, Vol. 11 (4), p. 661-671
Main authors: Ramchoun, Hassan; Ettaouil, Mohamed
Format: Article
Language: English
Online access: Full text
Description
Abstract: From a Bayesian point of view, the prior distribution over the weights of a multilayer feedforward neural network plays a central role in generalization. In this context, we propose a new prior law for the weight parameters that encourages network regularization more strongly than the previously proposed l1 and l2 priors. To train the network, we rely on Hamiltonian Monte Carlo, which is used to simulate the prior and the posterior distribution. The generated samples are used to approximate the gradient of the evidence, which allows us to re-estimate the hyperparameters that balance the trade-off between the likelihood term and the regularization term; the obtained posterior samples are then used to estimate the network output. The case problems studied in this paper include a regression task and a classification task. The results illustrate the advantages of our approach in terms of error rate compared to earlier approaches, although our method takes considerable time to converge.
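The abstract outlines an alternating scheme: HMC sampling of the weights, an evidence-based re-estimation of the hyperparameters, and prediction by averaging network outputs over posterior samples. Below is a minimal sketch of that loop in Python/NumPy, assuming a toy one-hidden-layer regression network. Since the abstract does not specify the paper's new prior, a standard Gaussian (l2) prior with precision alpha stands in for it, and a numerical gradient stands in for backpropagation; all names and shapes are illustrative, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network; weights flattened into one vector.
D_in, H, D_out = 1, 8, 1
n_w = D_in * H + H + H * D_out + D_out  # total number of weights

def unpack(w):
    """Split the flat weight vector into layer matrices and biases."""
    i = 0
    W1 = w[i:i + D_in * H].reshape(D_in, H); i += D_in * H
    b1 = w[i:i + H]; i += H
    W2 = w[i:i + H * D_out].reshape(H, D_out); i += H * D_out
    b2 = w[i:i + D_out]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def log_post(w, X, y, alpha, beta):
    """Unnormalized log posterior: Gaussian likelihood plus a Gaussian (l2)
    prior used here as a placeholder for the paper's proposed prior.
    alpha and beta are the hyperparameters trading off prior vs. likelihood."""
    err = y - forward(w, X)
    return -0.5 * beta * np.sum(err**2) - 0.5 * alpha * np.sum(w**2)

def grad_log_post(w, X, y, alpha, beta, eps=1e-5):
    """Central-difference gradient; a real implementation would use backprop."""
    g = np.zeros_like(w)
    for i in range(w.size):
        d = np.zeros_like(w); d[i] = eps
        g[i] = (log_post(w + d, X, y, alpha, beta)
                - log_post(w - d, X, y, alpha, beta)) / (2 * eps)
    return g

def hmc_step(w, X, y, alpha, beta, step=0.01, n_leap=20):
    """One Hamiltonian Monte Carlo transition with a leapfrog integrator."""
    p = rng.standard_normal(w.size)          # resample momentum
    w_new, p_new = w.copy(), p.copy()
    p_new += 0.5 * step * grad_log_post(w_new, X, y, alpha, beta)
    for _ in range(n_leap - 1):
        w_new += step * p_new
        p_new += step * grad_log_post(w_new, X, y, alpha, beta)
    w_new += step * p_new
    p_new += 0.5 * step * grad_log_post(w_new, X, y, alpha, beta)
    # Metropolis accept/reject on the Hamiltonian H = -log_post + kinetic term.
    h_old = -log_post(w, X, y, alpha, beta) + 0.5 * p @ p
    h_new = -log_post(w_new, X, y, alpha, beta) + 0.5 * p_new @ p_new
    return w_new if np.log(rng.uniform()) < h_old - h_new else w

# Toy regression data and a short chain.
X = np.linspace(-2, 2, 40)[:, None]
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)
w = 0.1 * rng.standard_normal(n_w)
samples = []
for t in range(500):
    w = hmc_step(w, X, y, alpha=1.0, beta=100.0)
    if t >= 250:                              # discard burn-in
        samples.append(w.copy())

# Evidence-style re-estimate of alpha from posterior samples (a stand-in for
# the evidence-gradient update the abstract only sketches).
alpha_new = n_w / np.mean([s @ s for s in samples])

# Predictive mean: average the network output over posterior samples,
# as the abstract describes.
y_pred = np.mean([forward(s, X) for s in samples], axis=0)

In practice one would alternate: run the chain with the current alpha and beta, re-estimate them from the samples, and repeat until the hyperparameters stabilize, which is consistent with the slow convergence the authors report.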
ISSN: 1868-6478, 1868-6486
DOI: 10.1007/s12530-019-09288-3