An efficient learning algorithm for improving generalization performance of radial basis function neural networks

Bibliographic Details
Published in: Neural Networks, 2000-05, Vol. 13 (4), p. 545-553
Main authors: Wang, Zheng-ou; Zhu, Tao
Format: Article
Language: English
Online access: Full text
Abstract: This paper presents an efficient recursive learning algorithm for improving the generalization performance of radial basis function (RBF) neural networks. The approach combines rival penalized competitive learning (RPCL) [Xu, L., Krzyzak, A., & Oja, E. (1993). Rival penalized competitive learning for clustering analysis, RBF net and curve detection. IEEE Transactions on Neural Networks, 4, 636–649] with regularized least squares (RLS) to provide an efficient and powerful procedure for constructing a minimal RBF network that generalizes very well. RPCL selects the number of hidden units and adjusts the centers, while RLS constructs the parsimonious network and estimates the connection weights. For the RLS step, a simple recursive algorithm is derived that needs no matrix calculation and therefore greatly reduces the computational cost. The combined algorithm significantly enhances both the generalization performance and the real-time capability of RBF networks. Simulation results on three different problems demonstrate much better generalization performance of the present algorithm compared with other existing similar algorithms.
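
To make the two-stage procedure described in the abstract concrete, the following is a minimal Python sketch, not the authors' implementation: an RPCL step (winner attracted, rival pushed away, rarely winning centers pruned) selects the RBF centers, and a standard regularized least-squares solve then estimates the output weights. The closed-form solve stands in for the paper's recursive, matrix-free weight update, which is not reproduced here; all learning rates, the Gaussian width, the pruning threshold, and the toy data are illustrative assumptions.

```python
import numpy as np

def rpcl_centers(X, k_init=15, alpha_w=0.05, alpha_r=0.002,
                 epochs=20, min_win_frac=0.02, rng=None):
    """Rival penalized competitive learning (RPCL) sketch.

    The winning center is pulled toward each sample while its closest
    rival is pushed slightly away; centers that rarely win are dropped,
    which gives an automatic choice of the number of hidden units.
    Rates and thresholds are illustrative, not the paper's values.
    """
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), size=k_init, replace=False)].copy()
    wins = np.zeros(k_init)
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            d = np.linalg.norm(centers - x, axis=1)
            winner, rival = np.argsort(d)[:2]
            centers[winner] += alpha_w * (x - centers[winner])  # attract winner
            centers[rival] -= alpha_r * (x - centers[rival])    # penalize rival
            wins[winner] += 1
    keep = wins / wins.sum() > min_win_frac                     # prune idle units
    return centers[keep]

def rbf_design(X, centers, width):
    """Gaussian design matrix Phi[i, j] = exp(-||x_i - c_j||^2 / (2*width^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def rls_weights(Phi, y, lam=1e-3):
    """Regularized least squares: w = (Phi^T Phi + lam*I)^(-1) Phi^T y.

    The paper derives a recursive update for this step that avoids
    matrix computation; the direct solve below is only a stand-in.
    """
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)

if __name__ == "__main__":
    # Toy 1-D regression: fit y = sin(x) with the RPCL + RLS pipeline.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)

    centers = rpcl_centers(X, rng=0)
    Phi = rbf_design(X, centers, width=0.8)
    w = rls_weights(Phi, y)
    rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
    print(f"{len(centers)} centers kept, train RMSE = {rmse:.4f}")
```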
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/S0893-6080(00)00029-0