A parallel topological feature map in APL

Bibliographic Details
Published in: APL Quote Quad 1993-09, Vol. 24 (1), p. 97-103
Authors: Frey, J., Scheppelmann, D., Glombitza, G.-P., Meinzer, H.
Format: Article
Language: English
Online access: Full text
Abstract:
One can distinguish two different approaches to neural networks: supervised networks and self-organizing, or unsupervised, networks. The first type of neural net is supplied with an ideal result for each input. During the learning procedure, the net adjusts the weighting factors of the links between neurons so that the input feature vectors map to the ideal output. Such nets are used, for example, in robotics, where the ideal result is well known: it is the position in which the robot should be placed. For cases where no ideal result is known, the second type of neural net, the so-called self-learning Topological Feature Map (TFM), is appropriate. This paper introduces such a neural net based on the idea of Kohonen's TFM. The original algorithm was highly sequential and therefore not well suited to an APL implementation. Parallelizing the algorithm led to significant improvements in both speed and convergence to the global optimum.
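
The abstract describes the Kohonen update only in words, so the following Dyalog APL fragment sketches what one vectorized (array-parallel) update step of a Topological Feature Map can look like, with every unit updated in a single array operation. This is not the authors' implementation; the names W (unit weights), G (grid coordinates), x (input vector), a (learning rate) and s (neighborhood radius), as well as the Gaussian neighborhood function, are assumptions made purely for illustration.

    ⍝ One hypothetical update step of a Kohonen-style Topological Feature Map.
    ⍝ ⍵ is a 5-item vector: W (n×d weights) G (n×2 grid coords) x (d-vector) a s
    TFMstep←{
        (W G x a s)←⍵
        X←(≢W)(≢x)⍴x                    ⍝ replicate the input to one row per unit
        D←+/(W-X)*2                     ⍝ squared distance of every unit to x
        c←D⍳⌊/D                         ⍝ index of the best matching unit
        H←*-(+/(G-(≢G)2⍴c⌷G)*2)÷2×s*2   ⍝ Gaussian neighborhood around the winner
        W+(a×H)×⍤0 1⊢X-W                ⍝ pull every unit toward x, weighted by H
    }

    ⍝ example: 9 units on a 3×3 grid, 2-dimensional inputs
    G←↑,⍳3 3
    W←?9 2⍴0
    W←TFMstep W G (0.3 0.8) 0.1 1

Because the whole weight matrix is modified at once rather than neuron by neuron, a step like this maps naturally onto APL's array primitives, which is the general direction the paper's parallelization takes.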
ISSN: 0163-6006
DOI: 10.1145/166198.166210