Structural stability of unsupervised learning in feedback neural networks

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, 1991-07, Vol. 36 (7), pp. 785-792
Author: Kosko, B.A.
Format: Article
Language: English
Description
Abstract: Structural stability is proved for a large class of unsupervised nonlinear feedback neural networks, the adaptive bidirectional associative memory (ABAM) models. The approach extends the ABAM models to the random-process domain as systems of stochastic differential equations and appends scaled Brownian diffusions. It is also proved that this much larger family of models, the random ABAM (RABAM) models, is globally stable. Intuitively, RABAM equilibria equal ABAM equilibria that vibrate randomly. The ABAM family includes many unsupervised feedback and feedforward neural models. All RABAM models permit Brownian annealing. The RABAM noise suppression theorem characterizes RABAM system vibration: the mean-squared activation and synaptic velocities decrease exponentially to their lower bounds, the respective temperature-scaled noise variances. The many neuronal and synaptic parameters missing from such neural network models are included, but as net random unmodeled effects; they do not affect the structure of real-time global computations.
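The abstract's noise-suppression claim can be sketched numerically. The following is a minimal, hypothetical Euler-Maruyama simulation of a two-field ABAM with additive activation dynamics and signal-Hebbian learning, to which scaled Brownian noise is appended (the RABAM extension). Function names, field sizes, and parameter values are illustrative assumptions, not taken from the paper; setting `temperature=0` recovers the deterministic ABAM, whose squared activation velocity decays toward zero, while a positive temperature leaves a residual noise floor.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_rabam(steps=2000, dt=0.01, temperature=0.0, n=4, p=3):
    """Euler-Maruyama sketch of a two-field ABAM/RABAM system.

    Illustrative dynamics (not the paper's exact equations):
      dx/dt = -x + M S(y)           activation dynamics, field F_X
      dy/dt = -y + M^T S(x)         activation dynamics, field F_Y
      dM/dt = -M + S(x) S(y)^T      signal-Hebbian learning law
    each perturbed by Brownian increments scaled by sqrt(temperature).
    """
    x = rng.standard_normal(n)              # F_X activations
    y = rng.standard_normal(p)              # F_Y activations
    M = 0.1 * rng.standard_normal((n, p))   # synaptic matrix
    S = np.tanh                             # bounded monotone signal function
    sq_vel = []                             # mean-squared activation velocity
    noise = np.sqrt(temperature * dt)       # scaled Brownian increment size
    for _ in range(steps):
        dx = -x + M @ S(y)
        dy = -y + M.T @ S(x)
        dM = -M + np.outer(S(x), S(y))
        sq_vel.append(float(np.mean(dx ** 2)))
        x += dx * dt + noise * rng.standard_normal(n)
        y += dy * dt + noise * rng.standard_normal(p)
        M += dM * dt + noise * rng.standard_normal((n, p))
    return x, y, M, sq_vel
```

In the deterministic case the squared velocity falls off exponentially as the coupled activation-and-learning system settles to an equilibrium; with noise appended, the trajectories instead vibrate about that equilibrium, consistent with the abstract's intuition that RABAM equilibria are ABAM equilibria that vibrate randomly.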
ISSN: 0018-9286
EISSN: 1558-2523
DOI: 10.1109/9.85058