Global Convergence of General Derivative-Free Trust-Region Algorithms to First- and Second-Order Critical Points

Bibliographic Details
Published in: SIAM Journal on Optimization, 2009-01, Vol. 20 (1), pp. 387-415
Main authors: Conn, Andrew R.; Scheinberg, Katya; Vicente, Luís N.
Format: Article
Language: English
Online access: Full text
Description
Abstract: In this paper we prove global convergence to first- and second-order stationary points for a class of derivative-free trust-region methods for unconstrained optimization. These methods are based on the sequential minimization of quadratic (or linear) models built from evaluating the objective function at sample sets. The derivative-free models are required to satisfy Taylor-type bounds, but, apart from that, the analysis is independent of the sampling techniques. A number of new issues are addressed, including global convergence when acceptance of iterates is based on simple decrease of the objective function, trust-region radius maintenance at the criticality step, and global convergence to second-order critical points.
ISSN: 1052-6234, 1095-7189
DOI: 10.1137/060673424
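
The abstract above describes trust-region methods that replace derivatives with quadratic (or linear) models built from objective samples. Purely as an illustration, the following is a minimal sketch of one such iteration, not the general framework analyzed in the paper: the model here is a diagonal quadratic fitted from 2n + 1 coordinate samples (just one way of obtaining a model with Taylor-type error bounds), the subproblem is solved only at the Cauchy point, and acceptance uses a standard ratio test rather than the simple-decrease rule treated in the paper. All names (build_model, cauchy_step, dfo_trust_region) and parameter values are hypothetical.

```python
import numpy as np

def build_model(f, x, delta):
    """Fit a simple quadratic model m(s) ~ f(x + s) from 2n + 1 coordinate
    samples (central differences with a diagonal Hessian). This is only one
    possible sample-based model; the paper's analysis covers any model
    satisfying Taylor-type error bounds."""
    n = x.size
    f0 = f(x)
    g = np.zeros(n)
    h = np.zeros(n)                       # diagonal Hessian approximation
    for i in range(n):
        e = np.zeros(n)
        e[i] = delta
        fp, fm = f(x + e), f(x - e)
        g[i] = (fp - fm) / (2.0 * delta)
        h[i] = (fp - 2.0 * f0 + fm) / delta**2
    return f0, g, np.diag(h)

def cauchy_step(g, H, delta):
    """Minimize the model along -g within the trust region (Cauchy point)."""
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)
    gHg = g @ H @ g
    t = delta / gnorm if gHg <= 0 else min(gnorm**2 / gHg, delta / gnorm)
    return -t * g

def dfo_trust_region(f, x0, delta0=1.0, eta=1e-4, max_iter=200):
    x = np.asarray(x0, dtype=float)
    delta = delta0
    for _ in range(max_iter):
        f0, g, H = build_model(f, x, delta)
        if max(np.linalg.norm(g), delta) < 1e-8:
            break                          # model gradient and radius both small
        s = cauchy_step(g, H, delta)
        pred = -(g @ s + 0.5 * s @ H @ s)  # predicted model decrease
        ared = f0 - f(x + s)               # actual decrease
        rho = ared / pred if pred > 0 else -np.inf
        if rho >= eta:                     # ratio test (the paper also treats simple decrease)
            x = x + s
            delta = min(2.0 * delta, 1e3)
        else:
            delta *= 0.5
    return x

# Usage: minimize the Rosenbrock function from a standard starting point.
rosen = lambda z: (1.0 - z[0])**2 + 100.0 * (z[1] - z[0]**2)**2
print(dfo_trust_region(rosen, [-1.2, 1.0], max_iter=2000))
```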