Bayesian decision theory on three-layer neural networks

Bibliographic Details
Published in: Neurocomputing (Amsterdam), 2005, Vol. 63, pp. 209-228
Authors: Ito, Yoshifusa; Srinivasan, Cidambi
Format: Article
Language: English
Online access: Full text
Description
Abstract: We discuss Bayesian decision theory on neural networks. In the two-category case where the state-conditional probability distributions are normal, a three-layer neural network having d hidden layer units can approximate the posterior probability in $L^p(\mathbb{R}^d, p)$, where d is the dimension of the space of observables. We extend this result to multicategory cases. The number of hidden layer units must then be increased, but it can be bounded by $\frac{1}{2}d(d+1)$ irrespective of the number of categories if the neural network has direct connections between the input and output layers. In the case where the state-conditional probability is one of the familiar probability distributions, such as the binomial, multinomial, Poisson, or negative binomial distribution, a two-layer neural network can approximate the posterior probability.
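
As an illustration of the structure behind these results (a sketch, not code or a construction from the paper): in the two-category normal case the Bayes posterior P(C1 | x) is a logistic sigmoid applied to a quadratic function of the observable x, and this sigmoid-of-a-quadratic form is what the network has to approximate. The minimal numerical check below assumes hypothetical Gaussian class-conditional parameters mu1, mu2, S1, S2 and priors P1, P2, none of which come from the article.

```python
# Illustration only (not code from the paper): with general Gaussian
# class-conditional densities, the Bayes posterior P(C1 | x) equals a logistic
# sigmoid applied to a quadratic function of the observable x. All parameter
# values below (mu1, mu2, S1, S2, P1, P2) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
d = 3                                         # dimension of the observable space
mu1, mu2 = rng.normal(size=d), rng.normal(size=d)
A1, A2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
S1, S2 = A1 @ A1.T + d * np.eye(d), A2 @ A2.T + d * np.eye(d)   # SPD covariances
P1, P2 = 0.3, 0.7                             # prior class probabilities

def log_gauss(x, mu, S):
    """Log density of the multivariate normal N(mu, S) at x."""
    diff = x - mu
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet + diff @ np.linalg.solve(S, diff))

def posterior_c1(x):
    """Bayes posterior P(C1 | x) computed directly from priors and densities."""
    a = np.log(P1) + log_gauss(x, mu1, S1)
    b = np.log(P2) + log_gauss(x, mu2, S2)
    return np.exp(a) / (np.exp(a) + np.exp(b))

def sigmoid_of_quadratic(x):
    """The same posterior written as sigmoid(g(x)), where
    g(x) = x^T Q x + w^T x + c is quadratic in x; the symmetric matrix Q
    has d(d+1)/2 independent entries."""
    Q = -0.5 * (np.linalg.inv(S1) - np.linalg.inv(S2))
    w = np.linalg.solve(S1, mu1) - np.linalg.solve(S2, mu2)
    _, ld1 = np.linalg.slogdet(S1)
    _, ld2 = np.linalg.slogdet(S2)
    c = (np.log(P1 / P2) - 0.5 * (ld1 - ld2)
         - 0.5 * (mu1 @ np.linalg.solve(S1, mu1) - mu2 @ np.linalg.solve(S2, mu2)))
    g = x @ Q @ x + w @ x + c
    return 1.0 / (1.0 + np.exp(-g))

x = rng.normal(size=d)
print(posterior_c1(x), sigmoid_of_quadratic(x))   # the two values coincide
```

Read this way, the symmetric quadratic part of g(x) contributes $\frac{1}{2}d(d+1)$ independent monomials x_i x_j, which lines up with the hidden-unit bound quoted in the abstract for networks with direct input-output connections; this is an informal correspondence, not the paper's proof.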
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2004.05.005