On the optimal number of hidden nodes in a neural network

Bibliographic Details
Main Authors: Wanas, N., Auda, G., Kamel, M.S., Karray, F.
Format: Conference Proceedings
Language: English
Subjects:
Online Access: Order full text
Description
Abstract: In this study we show, empirically, that the best performance of a neural network occurs when the number of hidden nodes is equal to log(T), where T is the number of training samples. This value yields both the optimal performance of the neural network and the optimal associated computational cost. We also show that the entropy measured in the hidden layer not only gives good foresight into the performance of the neural network, but can also be used as a criterion for optimizing it. This can be achieved by minimizing the network entropy (i.e. maximizing the entropy in the hidden layer) as a means of modifying the weights of the neural network.
ISSN: 0840-7789; 2576-7046
DOI: 10.1109/CCECE.1998.685648
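
Supplementary note: the abstract above describes a sizing heuristic (number of hidden nodes equal to log(T) for T training samples) and an entropy measure over the hidden layer. The following is a minimal, hypothetical Python sketch of both ideas; it is not taken from the paper. The log base, the normalization of the hidden activations, and the function names are assumptions made for illustration only.

import math
import numpy as np

def suggested_hidden_nodes(num_training_samples: int) -> int:
    # Sizing heuristic from the abstract: hidden nodes ~ log(T).
    # The paper's log base is not stated in this record; natural log is assumed.
    return max(1, round(math.log(num_training_samples)))

def hidden_layer_entropy(activations: np.ndarray) -> float:
    # Hedged sketch of an entropy measure over hidden-layer activations:
    # normalize absolute activations into a probability-like vector and
    # compute its Shannon entropy in bits. The normalization scheme is an
    # assumption, not the paper's exact definition.
    p = np.abs(activations) / (np.abs(activations).sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    T = 1000  # hypothetical training-set size
    n_hidden = suggested_hidden_nodes(T)
    print("suggested hidden nodes for T =", T, ":", n_hidden)  # ~7

    rng = np.random.default_rng(0)
    hidden = rng.random(n_hidden)  # stand-in for hidden-layer activations
    print("hidden-layer entropy (bits):", hidden_layer_entropy(hidden))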