New implicit updates in multi-step quasi-Newton methods for unconstrained optimisation
Published in: Journal of Computational and Applied Mathematics, 2003-03, Vol. 152 (1), p. 133-146
Format: Article
Language: English
Online access: Full text
Abstract: Multi-step quasi-Newton methods for optimisation (using data from more than one previous step to revise the current approximate Hessian) were introduced by Ford and Moghrabi (J. Comput. Appl. Math. 50 (1994) 305), who showed how to construct such methods by means of interpolating curves. These methods still use standard quasi-Newton formulae, but with the vectors normally employed in the formulae replaced by others determined from a multi-step version of the secant equation. Methods for defining the parameter values that correspond to the iterates on the interpolating curve (the 'accumulative' and 'fixed-point' approaches) were presented by Ford and Moghrabi (Optim. Methods Software 2 (1993) 357). Both the accumulative and the fixed-point methods measure the distances required to parameterise the interpolating polynomials via a norm defined by a positive-definite matrix M.
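For orientation, a brief sketch of the equations involved (notation assumed from the multi-step quasi-Newton literature rather than quoted from this record): the single-step secant equation, its multi-step analogue imposed on the tangent vectors of the interpolating curves, and the M-norm used by both parameterisation approaches.

\[
B_{k+1} s_k = y_k, \qquad s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k
\]
\[
B_{k+1}\, x'(\tau_m) = g'(\tau_m), \qquad x(\tau_i) = x_{k-m+1+i}, \quad i = 0, \dots, m
\]
\[
\lVert v \rVert_M = \sqrt{v^{\mathsf{T}} M v}
\]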
The fixed-point algorithm, which takes M to be the current approximate Hessian, was found experimentally to be the best of the six multi-step methods studied in Ford and Moghrabi (1993), all of which showed improved numerical performance by comparison with the standard single-step BFGS method.
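A minimal sketch of how such parameter values can be computed (the function names and the origin/sign conventions here are assumptions for illustration, not taken from the paper; the fixed-point variant would simply pass the current approximate Hessian as M):

```python
import numpy as np

def m_norm(v, M):
    """Length of v in the metric induced by the positive-definite matrix M."""
    return np.sqrt(v @ M @ v)

def accumulative_parameters(iterates, M):
    """Parameter values for the curve interpolating `iterates`, obtained by
    accumulating M-norm distances between consecutive iterates (a sketch of
    the 'accumulative' approach)."""
    taus = [0.0]
    for a, b in zip(iterates, iterates[1:]):
        taus.append(taus[-1] + m_norm(b - a, M))
    return np.array(taus)

# With M = I the parameters reduce to Euclidean arc length along the iterates.
xs = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([1.0, 2.0])]
print(accumulative_parameters(xs, np.eye(2)))  # [0. 1. 3.]
```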
To produce a better parameterisation of the interpolation, Ford (Comput. Math. Appl. 42 (2001) 1083) developed the idea of 'implicit update' methods. The fundamental concept is to determine an 'improved' version of the Hessian approximation to be used in computing the metric, while avoiding the computational expense of actually calculating that improved version. Two implicit methods (denoted by I2 and I3) were developed from F2 in Ford (2001): I2 employed parameter values generated from an implicit single-step BFGS update, while I3 used values from an implicit two-step update. In this paper, we describe the derivation of new implicit updates similar to I3. The experimental results we present show that one of the new implicit methods produces markedly better performance than the existing implicit methods, particularly as the dimension of the test problem grows.
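The computational saving behind the implicit approach can be illustrated with the BFGS formula: its rank-two structure lets the quadratic form of the updated matrix be evaluated from a few inner products with the old matrix, so the metric of the 'improved' Hessian approximation is available without ever forming it. A hedged sketch of this trick (not the paper's I2/I3 derivations; names are hypothetical):

```python
import numpy as np

def implicit_bfgs_norm(B, s, y, v):
    """||v|| in the metric of the BFGS update
        B+ = B - (B s)(B s)^T / (s^T B s) + y y^T / (y^T s),
    evaluated without forming B+, via
        v^T B+ v = v^T B v - (v^T B s)^2 / (s^T B s) + (v^T y)^2 / (y^T s)."""
    Bv, Bs = B @ v, B @ s
    quad = v @ Bv - (v @ Bs) ** 2 / (s @ Bs) + (v @ y) ** 2 / (s @ y)
    return np.sqrt(quad)

# Consistency check against the explicitly formed update (requires y^T s > 0).
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)               # positive-definite "current Hessian"
s = rng.standard_normal(n)
y = B @ s + 0.1 * rng.standard_normal(n)  # keeps the curvature y^T s positive
v = rng.standard_normal(n)

Bs = B @ s
B_plus = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
assert np.isclose(implicit_bfgs_norm(B, s, y, v), np.sqrt(v @ B_plus @ v))
```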
ISSN: 0377-0427, 1879-1778
DOI: 10.1016/S0377-0427(02)00701-X