Absolute stability of analytic neural networks: an approach based on finite trajectory length

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 2004-12, Vol. 51 (12), pp. 2460-2469
Authors: Forti, M., Tesi, A.
Format: Article
Language: English
Description
Abstract: A neural network (NN) is said to be convergent (or completely stable) when every trajectory tends to an equilibrium point (a stationary state). A stronger property is absolute stability, which means that convergence holds for any choice of the network parameters, and any choice of the nonlinear functions, within specified and well-characterized sets. In particular, absolute stability requires that the NN be convergent even when, for some parameter values, it possesses nonisolated equilibrium points (e.g., a manifold of equilibria). This property, which is particularly well suited to solving several classes of signal-processing tasks in real time, cannot in general be established via the classical LaSalle approach, due to that approach's inherent limitations in studying convergence when the NN has nonisolated equilibrium points. A method to address absolute stability is developed, based on proving that the total length of the NN trajectories is finite. A fundamental result on absolute stability is given under the hypotheses that the NN possesses a Lyapunov function and that the nonlinearities involved (neuron activations, inhibitions, etc.) are modeled by analytic functions. At the core of the proof of finiteness of trajectory length is the use of some basic inequalities for analytic functions due to Łojasiewicz. The result is applicable to a large class of neural networks, which includes the networks proposed by Vidyasagar, the Hopfield neural networks, and the standard cellular NNs introduced by Chua and Yang.
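A minimal worked sketch of the finite-trajectory-length mechanism, shown here for the special case of a gradient system \(\dot{x} = -\nabla V(x)\) with \(V\) analytic; the paper's setting (general NN dynamics admitting a Lyapunov function) is broader, and the constant \(c\) and exponent \(\theta\) below are quantities supplied by the Łojasiewicz inequality, not by this record. Near a critical point \(\bar{x}\), the Łojasiewicz gradient inequality gives \(c > 0\) and \(\theta \in (0, 1/2]\) such that, in a neighborhood of \(\bar{x}\),

\[ |V(x) - V(\bar{x})|^{1-\theta} \le c\,\|\nabla V(x)\| . \]

Along a trajectory with \(V(x(t)) > V(\bar{x})\), this yields

\[ \frac{d}{dt}\bigl(V(x(t)) - V(\bar{x})\bigr)^{\theta} = -\theta\,\bigl(V(x(t)) - V(\bar{x})\bigr)^{\theta-1}\,\|\nabla V(x(t))\|^{2} \le -\frac{\theta}{c}\,\|\dot{x}(t)\| , \]

and integrating bounds the total trajectory length:

\[ \int_{0}^{\infty} \|\dot{x}(t)\|\,dt \le \frac{c}{\theta}\,\bigl(V(x(0)) - V(\bar{x})\bigr)^{\theta} < \infty . \]

Finite length forces \(x(t)\) to converge to a single equilibrium even when the equilibria form a continuum, which is exactly the situation the classical LaSalle approach cannot resolve.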
ISSN: 1549-8328, 1057-7122, 1558-0806
DOI: 10.1109/TCSI.2004.838143