An Efficient Generalization of Battiti-Shanno’s Quasi-Newton Algorithm for Learning in MLP-Networks
Main Authors: , ,
Format: Book chapter
Language: English
Subjects:
Online Access: Full text
Summary: This paper presents a novel Quasi-Newton method for the minimization of the error function of a feed-forward neural network. The method is a generalization of Battiti's well-known OSS algorithm. The proposed approach aims to achieve a significant improvement both in computational effort and in the capability of locating the global minimum of the error function. The technique described in this work is founded on the innovative concept of a "convex algorithm", which is used to avoid possible entrapment in local minima. Convergence results as well as numerical experiments are presented.
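The record does not include the method itself, but for orientation: the OSS algorithm that the paper generalizes computes a memoryless (one-step) BFGS search direction from the last step and the last gradient change. The sketch below is a minimal illustration of that direction formula in Python with NumPy. The function name `oss_direction`, the curvature safeguard, and the toy quadratic objective are assumptions introduced for illustration; this is not the paper's implementation, and the paper's "convex algorithm" generalization is not reproduced here.

```python
import numpy as np

def oss_direction(g, s, y, eps=1e-12):
    """One-step secant (memoryless BFGS) search direction.

    g: current gradient; s: last step, x_k - x_{k-1};
    y: gradient change, g_k - g_{k-1}.
    """
    p = s @ y                      # curvature along the last step
    if p <= eps:                   # safeguard: no positive curvature,
        return -g                  # fall back to steepest descent
    sg, yg = s @ g, y @ g
    A = -(1.0 + (y @ y) / p) * (sg / p) + yg / p
    B = sg / p
    return -g + A * s + B * y

# Toy usage (hypothetical): minimize f(x) = 0.5 x^T Q x, standing in
# for a network's error function, with an Armijo backtracking step.
Q = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

x = np.ones(3)
g = grad(x)
d = -g                             # first iteration: steepest descent
for _ in range(60):
    t = 1.0
    while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
        t *= 0.5                   # Armijo backtracking line search
    x_new = x + t * d
    g_new = grad(x_new)
    d = oss_direction(g_new, x_new - x, g_new - g)
    x, g = x_new, g_new

print(x)   # converges toward the minimizer at the origin
```

Since the positive-curvature check `s @ y > 0` keeps the implicit memoryless BFGS matrix positive definite, the computed direction is always a descent direction, which is what makes the backtracking loop above terminate.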
ISSN: 0302-9743, 1611-3349
DOI: 10.1007/978-3-540-30499-9_74