A partial analysis of stochastic convergence in a generalized two-layer perceptron with backpropagation learning

Bibliographic Details
Main Authors: Vaughn, J.L., Bershad, N.J., Shynk, J.J.
Format: Conference paper
Language: English
Description
Summary: The authors study the stationary points of a two-layer perceptron that attempts to identify the parameters of a specific stochastic nonlinear system. The training sequence is modeled as the output of the nonlinear system, driven by an independent sequence of zero-mean Gaussian vectors with independent components. The training rule is a limiting case of backpropagation, chosen to simplify the analysis. Equations are given that define the stationary points of the algorithm for an arbitrary output nonlinearity g(x). The solutions to these equations for the outer layer show that, for a continuous g(x), there is a unique solution for the outer-layer weights for any given set of fixed hidden-layer weights. These solutions do not necessarily yield zero error. However, if the hidden-layer weights are also trained, the unique zero-error solution requires that the parameters of the two-layer perceptron exactly match those of the nonlinear system.
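
The following is a minimal simulation sketch of the identification setup the abstract describes. It assumes g(x) = tanh(x), a linear outer layer, and an ordinary stochastic-gradient (backpropagation) update; the paper's exact system model, its limiting-case training rule, and the sizes and step size below are illustrative assumptions, not taken from the source. In the small-step-size limit, the stationary points of such an update are the weights at which the error is uncorrelated with the instantaneous gradient, i.e. E[e(n) dy/dw] = 0 for every weight w.

import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID = 4, 3                  # assumed input dimension and hidden-layer size
mu = 0.01                           # assumed small step size

def g(x):                           # assumed nonlinearity; the paper treats
    return np.tanh(x)               # an arbitrary continuous g(x)

def g_prime(x):
    return 1.0 - np.tanh(x) ** 2

def forward(x, W, a):
    """Two-layer perceptron: hidden weights W, outer-layer weights a."""
    z = W @ x
    h = g(z)
    return a @ h, h, z

# "True" stochastic nonlinear system whose parameters are to be identified.
W_true = rng.standard_normal((N_HID, N_IN))
a_true = rng.standard_normal(N_HID)

# Perceptron under training, initialized away from the system parameters.
W = rng.standard_normal((N_HID, N_IN))
a = rng.standard_normal(N_HID)

for n in range(200_000):
    x = rng.standard_normal(N_IN)          # zero-mean Gaussian input with
                                           # independent components
    d, _, _ = forward(x, W_true, a_true)   # training sequence = system output
    y, h, z = forward(x, W, a)
    e = d - y                              # instantaneous output error
    # Backpropagation (stochastic-gradient) updates for both layers.
    a += mu * e * h
    W += mu * e * np.outer(a * g_prime(z), x)

# With both layers trained, zero error is reached only when the perceptron
# parameters match those of the system (up to the usual hidden-unit
# permutation and sign symmetries of tanh).
test = rng.standard_normal((1000, N_IN))
mse = np.mean([(forward(x, W_true, a_true)[0] - forward(x, W, a)[0]) ** 2
               for x in test])
print(f"mean-squared identification error: {mse:.3e}")

Note that the training loop may also settle at a nonzero-error stationary point, consistent with the abstract's observation that the outer-layer solutions for fixed hidden-layer weights do not necessarily yield zero error.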
DOI: 10.1109/NNSP.1992.253660