An efficient hybrid conjugate gradient method for unconstrained optimization

Bibliographic Details
Published in: Annals of Operations Research, 2001-01, Vol. 103 (1), p. 33
Main authors: Dai, Y. H.; Yuan, Y.
Format: Article
Language: English
Online access: Full text
Description
Abstract: Recently, we proposed a nonlinear conjugate gradient method that produces a descent search direction at every iteration and converges globally provided that the line search satisfies the weak Wolfe conditions. In this paper, we study methods related to this new nonlinear conjugate gradient method. Specifically, if the size of the scalar βk relative to the one in the new method lies in a certain interval, then the corresponding methods are proved to be globally convergent; otherwise, we can construct a convex quadratic example showing that the methods need not converge. Numerical experiments are reported for two combinations of the new method and the Hestenes-Stiefel conjugate gradient method. The initial results show that one of the hybrid methods is especially efficient for the given test problems.
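As a rough illustration of the kind of hybrid update the abstract describes, the sketch below combines the Hestenes-Stiefel and Dai-Yuan βk formulas via βk = max(0, min(βHS, βDY)), a hybrid rule widely associated with this line of work. The test function, the Armijo backtracking line search (a stand-in for the weak Wolfe conditions), and the restart safeguards are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of a hybrid conjugate gradient method on a convex quadratic.
# beta_k = max(0, min(beta_HS, beta_DY)); details are illustrative assumptions.
import numpy as np

def hybrid_cg(f, grad, x0, max_iter=200, tol=1e-8):
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Safeguard: restart with steepest descent if d is not a descent direction.
        if g @ d >= 0:
            d = -g
        # Backtracking line search enforcing the Armijo condition
        # (a simplification of the weak Wolfe conditions used in the paper).
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        denom = d @ y
        if abs(denom) < 1e-16:
            d = -g_new  # restart when the curvature denominator degenerates
        else:
            beta_hs = (g_new @ y) / denom          # Hestenes-Stiefel
            beta_dy = (g_new @ g_new) / denom      # Dai-Yuan
            beta = max(0.0, min(beta_hs, beta_dy)) # hybrid truncation
            d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage on a simple convex quadratic f(x) = 0.5 x^T A x - b^T x,
# whose minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = hybrid_cg(f, grad, np.zeros(2))
```

On this quadratic the iteration drives the gradient norm below the tolerance, so `x_star` approximately satisfies A x = b.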
ISSN: 0254-5330, 1572-9338
DOI: 10.1023/A:1012930416777