Unsupervised learning of sigmoid perceptron

Bibliographic details
Main authors: Uykan, Z., Koivo, H.N.
Format: Conference paper
Language: English
Description
Summary: A previous paper derived a clustering-based upper bound on the mean squared output error of radial basis function networks that depends explicitly on the network parameters. This study focuses on the single-hidden-layer sigmoid perceptron. Building on that analysis, this paper (i) presents a similar upper bound on the output error of the sigmoid perceptron, which can be made arbitrarily small by increasing the number of sigmoid units, and (ii) proposes unsupervised learning of the input-layer (synaptic) weights in place of traditional gradient-descent supervised learning: the proposed method determines the input-layer weights with a clustering algorithm that minimizes the upper bound, rather than with the gradient-descent algorithm minimizing the output error that is traditionally used in perceptron design. Simulation results show that (i) the proposed hierarchical method requires less learning time than a gradient-descent supervised algorithm, (ii) it yields performance comparable to a radial basis function network, and (iii) the upper bounds minimized during clustering are quite tight to the output error function.
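The summary describes the method only at a high level, so the following is a minimal sketch of the idea rather than the authors' algorithm: hidden-layer weights of a one-hidden-layer sigmoid network are set by unsupervised clustering of the inputs (plain k-means here, as an assumption; the paper's specific clustering rule and upper bound are not reproduced), and only the output layer is then fit by least squares. The mapping from a cluster center c_j to sigmoid weights (w_j = gain * c_j, with a bias placing the sigmoid transition at the center) is hypothetical, chosen purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means; returns cluster centers of shape (k, d)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers

def fit_clustered_sigmoid_net(X, y, n_hidden, gain=1.0):
    """Input-layer weights come from clustering, not gradient descent.
    Hypothetical mapping: w_j = gain * c_j with bias b_j = -gain * ||c_j||^2,
    so w_j . c_j + b_j = 0 and each unit's sigmoid transition passes through
    its cluster center (activation 0.5 there). Output weights are the
    least-squares solution."""
    centers = kmeans(X, n_hidden)
    W = gain * centers                          # (k, d) input-layer weights
    b = -gain * (centers ** 2).sum(1)           # (k,) biases
    H = sigmoid(X @ W.T + b)                    # hidden activations (n, k)
    H1 = np.hstack([H, np.ones((len(X), 1))])   # append output-bias column
    v, *_ = np.linalg.lstsq(H1, y, rcond=None)  # output-layer weights
    return W, b, v

def predict(X, W, b, v):
    H1 = np.hstack([sigmoid(X @ W.T + b), np.ones((len(X), 1))])
    return H1 @ v

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(400, 1))
    y = np.sin(3 * X[:, 0])                     # toy regression target
    W, b, v = fit_clustered_sigmoid_net(X, y, n_hidden=20, gain=5.0)
    mse = np.mean((predict(X, W, b, v) - y) ** 2)
    print(f"train MSE: {mse:.4f}")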
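```

On the toy problem, increasing n_hidden drives the training error down, mirroring the paper's claim (i) that the bound shrinks as sigmoid units are added; the design choice of fixing the input layer by clustering keeps the only trained part linear, which is where the reported speed advantage over gradient descent would come from.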
ISSN: 1520-6149, 2379-190X
DOI: 10.1109/ICASSP.2000.860152