Back propagation through adjoints for the identification of nonlinear dynamic systems using recurrent neural models


Bibliographic Details
Published in: IEEE Transactions on Neural Networks, March 1994, Vol. 5 (2), pp. 213-228
Authors: Srinivasan, B., Prasad, U.R., Rao, N.J.
Format: Article
Language: English
Description
Abstract: In this paper, back propagation is reinvestigated for an efficient evaluation of the gradient in arbitrary interconnections of recurrent subsystems. It is shown that the error has to be back-propagated through the adjoint model of the system and that the gradient can only be obtained after a delay. A faster version, accelerated back propagation, which eliminates this delay, is also developed. Various schemes, including the sensitivity method, are studied for updating the weights of the network using these gradients. Motivated by the Lyapunov approach and the adjoint model, predictive back propagation and its variant, targeted back propagation, are proposed. A further refinement, predictive back propagation with filtering, is then developed, in which the states of the model are also updated. The convergence of this scheme is assured, and it is shown that it is sufficient to back-propagate as many time steps as the order of the system for convergence. As a preamble, the convergence of online batch and sample-wise updates in feedforward models is analyzed using the Lyapunov approach.
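The abstract only sketches the adjoint idea in prose. As a rough illustration (not the paper's algorithm), the snippet below computes the gradient of a squared-error cost for one hypothetical recurrent subsystem, x_{k+1} = tanh(A x_k + B u_k) with output yhat_k = C x_k, by running a costate backward through the adjoint model; the model, the symbols A, B, C, and all function names are assumptions made here for the example.

```python
import numpy as np

def simulate(A, B, C, u, x0):
    """Forward pass; returns the state trajectory and the model outputs."""
    N = u.shape[0]
    x = np.zeros((N + 1, x0.size))
    x[0] = x0
    for k in range(N):
        x[k + 1] = np.tanh(A @ x[k] + B @ u[k])
    return x, x[1:] @ C.T  # outputs at x_1 .. x_N

def loss(A, B, C, u, y, x0):
    _, yhat = simulate(A, B, C, u, x0)
    return 0.5 * np.sum((yhat - y) ** 2)

def adjoint_gradients(A, B, C, u, y, x0):
    """Run the costate lam_k = dJ/dx_k backward through the adjoint model
    and accumulate the parameter gradients along the way."""
    x, yhat = simulate(A, B, C, u, x0)
    e = yhat - y                           # output errors e_k
    gA, gB, gC = np.zeros_like(A), np.zeros_like(B), np.zeros_like(C)
    lam = np.zeros(x0.size)                # no costate beyond the horizon
    for k in range(u.shape[0], 0, -1):
        lam = lam + C.T @ e[k - 1]         # inject the local output error at x_k
        gC += np.outer(e[k - 1], x[k])
        d = (1.0 - x[k] ** 2) * lam        # back through the tanh nonlinearity
        gA += np.outer(d, x[k - 1])
        gB += np.outer(d, u[k - 1])
        lam = A.T @ d                      # costate for the previous state x_{k-1}
    return gA, gB, gC

# Quick finite-difference check on one entry of A.
rng = np.random.default_rng(0)
n, m, p, N = 3, 2, 1, 50
A = 0.5 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
u = rng.standard_normal((N, m))
y = rng.standard_normal((N, p))
x0 = np.zeros(n)

gA, _, _ = adjoint_gradients(A, B, C, u, y, x0)
eps = 1e-6
Ap = A.copy(); Ap[0, 0] += eps
fd = (loss(Ap, B, C, u, y, x0) - loss(A, B, C, u, y, x0)) / eps
print(gA[0, 0], fd)  # the two values should agree to several digits
```

Note the structure the abstract alludes to: the costate recursion runs backward over the whole window of states, so the gradient is only available after that window has been processed, which is the delay that accelerated back propagation is said to eliminate.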
ISSN: 1045-9227
EISSN: 1941-0093
DOI: 10.1109/72.279186