Artificial neural networks, back propagation, and the Kelley-Bryson gradient procedure

Bibliographic details
Published in: Journal of Guidance, Control, and Dynamics, 1990-09, Vol. 13 (5), p. 926-928
Author: Dreyfus, Stuart E.
Format: Article
Language: English
Online access: Full text
Description
Abstract: Upon the concatenation of all the cases to be learned in a neural-net mapping problem into a single large network with a vector output (one component per case), a standard discrete-time optimal control problem is obtained. The Kelley-Bryson (1960, 1962) gradient formulas for such problems have been rediscovered by neural-network researchers and elaborated under the rubric of 'back propagation'. The recursive derivation of these formulas on the basis of the chain rule, as commonly encountered in the neural-network literature, was initially employed for optimal-control problems by Dreyfus (1962). (O.C.)
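The recursive chain-rule derivation that the abstract refers to can be illustrated with a minimal sketch (not taken from the article; the two-layer network, weight values, and variable names here are hypothetical): the forward pass stores the intermediate states of each stage, as in a discrete-time optimal control problem, and the backward pass propagates adjoint (costate) quantities stage by stage via the chain rule.

```python
import math

def forward(x, w1, w2):
    """Forward pass: store the stage-1 state z1 and the output y."""
    z1 = math.tanh(w1 * x)   # stage 1 (hidden state)
    y = w2 * z1              # stage 2 (output)
    return z1, y

def loss(y, target):
    return 0.5 * (y - target) ** 2

def backprop(x, w1, w2, target):
    """Backward recursion: chain rule applied stage by stage."""
    z1, y = forward(x, w1, w2)
    dL_dy = y - target                     # adjoint at the output stage
    dL_dw2 = dL_dy * z1                    # gradient w.r.t. stage-2 weight
    dL_dz1 = dL_dy * w2                    # adjoint propagated to stage 1
    dL_dw1 = dL_dz1 * (1 - z1 ** 2) * x    # tanh'(u) = 1 - tanh(u)^2
    return dL_dw1, dL_dw2

# Check the recursion against central finite differences.
x, w1, w2, t, eps = 0.7, 0.3, -0.5, 0.2, 1e-6
g1, g2 = backprop(x, w1, w2, t)
n1 = (loss(forward(x, w1 + eps, w2)[1], t)
      - loss(forward(x, w1 - eps, w2)[1], t)) / (2 * eps)
n2 = (loss(forward(x, w1, w2 + eps)[1], t)
      - loss(forward(x, w1, w2 - eps)[1], t)) / (2 * eps)
assert abs(g1 - n1) < 1e-6 and abs(g2 - n2) < 1e-6
```

The point of the sketch is only the structure: the backward pass reuses the stored forward states, which is exactly the shape of the Kelley-Bryson adjoint recursion for discrete-time optimal control.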
ISSN: 0731-5090, 1533-3884
DOI: 10.2514/3.25422