On the Kalman filtering method in neural network training and pruning

Bibliographic Details
Published in: IEEE Transactions on Neural Networks, 1999-01, Vol. 10 (1), pp. 161-166
Main Authors: Sum, J., Chi-Sing Leung, Young, G.H., Wing-Kay Kan
Format: Article
Language: English
Description
Abstract: When using the extended Kalman filter approach to train and prune a feedforward neural network, one usually encounters the problems of how to set the initial condition and how to use the result obtained to prune the network. In this paper, some cues on setting the initial condition are presented and illustrated with a simple example. Then, based on three assumptions: 1) the size of the training set is large enough; 2) the training is able to converge; and 3) the trained network model is close to the actual one, an elegant equation linking the error sensitivity measure (the saliency) and the result obtained via an extended Kalman filter is devised. The validity of the devised equation is then verified by a simulated example.
ISSN: 1045-9227, 1941-0093
DOI: 10.1109/72.737502
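
The abstract describes two practical steps: training a feedforward network with an extended Kalman filter (EKF), and then reusing the filter's result to rank weights for pruning. The following is a minimal sketch of that workflow, assuming a scalar-output, single-hidden-layer network, a diffuse initial covariance, and an OBD/OBS-style saliency computed from the final weight estimate and the diagonal of the EKF covariance; the constants and the exact saliency expression here are illustrative assumptions, not the paper's own formulas.

```python
# Hedged sketch (not the authors' exact algorithm): EKF training of a small
# feedforward network, then a saliency-based ranking of weights for pruning.
import numpy as np

rng = np.random.default_rng(0)

def forward(w, x, n_hidden):
    """Single-hidden-layer tanh network with scalar input/output.
    w packs [W1 (n_hidden), b1 (n_hidden), W2 (n_hidden), b2 (1)]."""
    W1 = w[:n_hidden]
    b1 = w[n_hidden:2 * n_hidden]
    W2 = w[2 * n_hidden:3 * n_hidden]
    b2 = w[-1]
    h = np.tanh(W1 * x + b1)
    return W2 @ h + b2

def jacobian(w, x, n_hidden, eps=1e-6):
    """Finite-difference row Jacobian d f / d w (length n_w)."""
    J = np.zeros_like(w)
    f0 = forward(w, x, n_hidden)
    for i in range(w.size):
        wp = w.copy()
        wp[i] += eps
        J[i] = (forward(wp, x, n_hidden) - f0) / eps
    return J

# Toy regression data: y = sin(x) + noise.
xs = rng.uniform(-np.pi, np.pi, size=400)
ys = np.sin(xs) + 0.05 * rng.standard_normal(xs.size)

n_hidden = 5
n_w = 3 * n_hidden + 1
w = 0.1 * rng.standard_normal(n_w)   # initial weight estimate
P = 100.0 * np.eye(n_w)              # assumed diffuse initial covariance
R = 0.05 ** 2                        # assumed measurement-noise variance

# EKF training: one scalar measurement update per training pattern.
for x, y in zip(xs, ys):
    H = jacobian(w, x, n_hidden)     # linearization of the network output
    S = H @ P @ H + R                # innovation variance (scalar)
    K = (P @ H) / S                  # Kalman gain
    w = w + K * (y - forward(w, x, n_hidden))
    P = P - np.outer(K, H @ P)       # covariance update

# Illustrative saliency from the EKF result (assumed OBD/OBS-style form):
# weights with the smallest saliency are candidates for pruning.
saliency = w ** 2 / (2.0 * np.diag(P))
prune_order = np.argsort(saliency)
print("least salient weight indices:", prune_order[:5])
```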