Optimal algorithms for well-conditioned nonlinear systems of equations
Published in: IEEE Transactions on Computers, July 2001, Vol. 50 (7), pp. 689-698
Main authors:
Format: Article
Language: English
Summary: We propose solving nonlinear systems of equations by function optimization and give an optimal algorithm that relies on a special canonical form of gradient descent. The algorithm can be applied under certain assumptions on the function to be optimized: an upper bound must exist for the norm of the Hessian, while the norm of the gradient must be bounded from below. Due to its intrinsic structure, the algorithm is particularly appealing for parallel implementation. As a particular case, more specific results are given for linear systems. We prove that reaching a solution with precision ε takes Θ(n²k² log(k/ε)) operations, where k is the condition number of the system matrix A and n is the problem dimension. Related results hold for systems of quadratic equations, for which an estimate of the required bounds can be devised. Finally, we report numerical results to establish the actual computational burden of the proposed method and to assess its performance against classical algorithms for solving linear and quadratic equations.
ISSN: 0018-9340, 1557-9956
DOI: 10.1109/12.936235
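
To make the optimization viewpoint described in the summary concrete, here is a minimal sketch in Python with NumPy: a linear system Ax = b is recast as the minimization of f(x) = ½‖Ax − b‖², which is then handled by plain gradient descent. This is only an illustration of the general idea, not the paper's canonical-form algorithm; the step-size rule, stopping criterion, and test matrix are assumptions chosen for the sketch.

```python
import numpy as np


def solve_linear_by_gradient_descent(A, b, eps=1e-8, max_iter=100_000):
    """Solve A x = b by minimizing f(x) = 0.5 * ||A x - b||^2 with plain
    gradient descent.

    Illustration of the optimization viewpoint only; not the canonical-form
    algorithm of the paper.  The gradient of f is A^T (A x - b), and a fixed
    step of 1 / ||A^T A||_2 keeps the iteration stable.
    """
    n = A.shape[1]
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # reciprocal Lipschitz constant of the gradient
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)
        if np.linalg.norm(grad) <= eps:       # small gradient => A x is close to b
            break
        x -= step * grad
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 50
    # Build a well-conditioned test matrix: orthogonal Q times a modest diagonal.
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    A = Q @ np.diag(np.linspace(1.0, 10.0, n)) @ Q.T   # condition number k = 10
    b = rng.standard_normal(n)
    x = solve_linear_by_gradient_descent(A, b)
    print("residual norm:", np.linalg.norm(A @ x - b))
```

For this quadratic objective, the iteration count of plain gradient descent grows with the squared condition number of A, while each iteration costs O(n²) for the matrix-vector products, which is of the same general shape as the Θ(n²k² log(k/ε)) estimate quoted in the summary for the authors' optimal method.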