On the Study of Hybridized online/batch quasi-Newton Training for Feedforward Neural Networks
Published in: | Journal of Signal Processing 2012/09/30, Vol.16(5), pp.451-458 |
---|---|
Main authors: | , , |
Format: | Article |
Language: | eng ; jpn |
Keywords: | |
Online access: | Full text |
Abstract: | Various techniques based on the gradient descent method have been studied as training algorithms for neural networks. Neural network training poses a data-driven optimization problem in which the objective function is a sum of loss terms over the set of data to be modeled. For a given training data set, a gradient-based algorithm operates in one of two modes: online (stochastic) or batch. In this paper, a robust training algorithm is proposed that combines the online mode with the batch one. The validity of the proposed algorithm is demonstrated through computer simulations, in comparison with previous quasi-Newton-based training methods. |
---|---|
ISSN: | 1342-6230 1880-1013 |
DOI: | 10.2299/jsp.16.451 |
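The abstract contrasts the two standard modes of gradient-based training over a sum-of-losses objective. The sketch below is not the paper's hybrid quasi-Newton algorithm; it is a minimal, generic illustration of the distinction, using plain gradient descent on a linear least-squares model (all names and data here are hypothetical):

```python
import numpy as np

# Synthetic data for an objective E(w) = sum_p e_p(w), where each per-sample
# loss is the squared error of a linear model: e_p(w) = 0.5 * (x_p . w - t_p)^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

def grad_p(w, x, t):
    # Gradient of the single-sample loss e_p(w).
    return (x @ w - t) * x

def batch_step(w, lr=0.1):
    # Batch mode: accumulate the gradient over the whole training set,
    # then apply a single weight update.
    g = sum(grad_p(w, x, t) for x, t in zip(X, y)) / len(X)
    return w - lr * g

def online_epoch(w, lr=0.1):
    # Online (stochastic) mode: update the weights after every sample.
    for x, t in zip(X, y):
        w = w - lr * grad_p(w, x, t)
    return w

w_batch = np.zeros(3)
for _ in range(100):          # 100 full-batch updates
    w_batch = batch_step(w_batch)

w_online = online_epoch(np.zeros(3))  # one online pass = 50 per-sample updates
```

A hybrid scheme, as the abstract describes, would interpolate between these extremes, e.g. by updating on subsets of the data, while the paper additionally replaces the plain gradient step with a quasi-Newton direction.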