Training a neural network with conjugate gradient methods


Bibliographic details
Main authors: Towsey, M., Alpsan, D., Sztriha, L.
Format: Conference paper
Language: English
Description
Abstract: This study investigates the use of several variants of conjugate gradient (CG) optimisation and line search methods to accelerate the convergence of an MLP neural network learning two medical signal classification problems. Much of the previous work has been done with artificial problems which have little relevance to real world problems, and results on real world problems have been variable. The effectiveness of CG compared to standard backpropagation (BP) depended on the degree to which the learning task required finding a global minimum. If learning was stopped when the training set had been learned to an acceptable degree of error tolerance (the typical pattern classification problem), standard BP was faster than CG and did not display the convergence difficulties usually attributed to it. If learning required finding a global minimum (as in function minimisation or function estimation tasks), CG methods were faster, but performance was very much dependent on careful selection of 'tuning' parameters and line search. This requirement for meta-optimisation was more difficult for CG than for BP because of the larger number of parameters.
DOI:10.1109/ICNN.1995.488128
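As a rough illustration of the technique the abstract describes, the following is a minimal sketch (not the paper's code, data, or parameter settings): a one-hidden-layer MLP learning the XOR problem by Polak-Ribière+ nonlinear conjugate gradient with an Armijo backtracking line search. The architecture, hyperparameters, and CG variant are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only -- NOT the paper's code or data. A one-hidden-layer
# MLP learns XOR via Polak-Ribiere+ nonlinear conjugate gradient with an
# Armijo backtracking line search; architecture and hyperparameters are
# arbitrary choices for demonstration.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

# Parameter layout: W1 (2x4), b1 (4), W2 (4x1), b2 (1), flattened to one vector
shapes = [(2, 4), (4,), (4, 1), (1,)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(w):
    out, i = [], 0
    for s, n in zip(shapes, sizes):
        out.append(w[i:i + n].reshape(s))
        i += n
    return out

def loss_grad(w):
    """Mean-squared error of the MLP and its gradient via backpropagation."""
    W1, b1, W2, b2 = unpack(w)
    a1 = np.tanh(X @ W1 + b1)                    # hidden layer
    p = 1.0 / (1.0 + np.exp(-(a1 @ W2 + b2)))    # sigmoid output
    L = np.mean((p - y) ** 2)
    dz2 = 2 * (p - y) * p * (1 - p) / len(X)
    dW2, db2 = a1.T @ dz2, dz2.sum(0)
    dz1 = (dz2 @ W2.T) * (1 - a1 ** 2)
    dW1, db1 = X.T @ dz1, dz1.sum(0)
    return L, np.concatenate([dW1.ravel(), db1, dW2.ravel(), db2])

w = rng.normal(0.0, 0.5, sum(sizes))
L, g = loss_grad(w)
d = -g
for _ in range(200):
    # Armijo backtracking line search along the current CG direction
    t = 1.0
    while True:
        Lnew, gnew = loss_grad(w + t * d)
        if Lnew <= L + 1e-4 * t * (g @ d) or t < 1e-10:
            break
        t *= 0.5
    w = w + t * d
    # Polak-Ribiere+ beta (clipped at zero, a common restart-friendly variant)
    beta = max(0.0, gnew @ (gnew - g) / (g @ g))
    d = -gnew + beta * d
    if gnew @ d >= 0:   # not a descent direction: restart with steepest descent
        d = -gnew
    L, g = Lnew, gnew

print(f"final training MSE: {L:.4f}")
```

The sketch shows the interaction the abstract highlights: the CG update quality hinges on the line search and on the extra tuning knobs (the beta formula, the Armijo constant, the restart rule), which is exactly the meta-optimisation burden the authors found heavier for CG than for plain backpropagation.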