A new algorithm for stochastic optimization



Bibliographic details
Author: Andradottir, S.
Format: Conference proceedings
Language: English
Description
Summary: Classical stochastic optimization algorithms have severe problems associated with them: they converge extremely slowly on problems where the objective function is very flat, and they often diverge when the objective function is steep. The author has developed a stochastic optimization algorithm that is more robust than the older algorithms in that it is guaranteed to converge on a larger class of problems. This algorithm is guaranteed to converge even when the iterates are not assumed a priori to be bounded. This algorithm is also observed to converge faster on a significant class of problems. As the parameters can be chosen so that the new algorithm behaves very much like the older algorithms (except that it converges on a larger class of problems), this algorithm should always be used in preference to the older algorithms.
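The "classical" algorithms the abstract refers to are Robbins-Monro-type stochastic approximation schemes. As a minimal sketch of the two failure modes described (slow convergence on flat objectives, divergence on steep ones), the following Python fragment runs such a scheme on an illustrative objective; the function name, gain sequence, noise model, and divergence cutoff are all assumptions for the sake of the demonstration, not details from the paper.

```python
import random

def classical_sa(grad, x0, steps=2000, a=1.0, noise=0.1):
    """Classical Robbins-Monro stochastic approximation:
    x_{n+1} = x_n - a_n * G_n, with gains a_n = a/(n+1) and
    G_n a noisy observation of the gradient at x_n.
    (Illustrative sketch only; not the paper's algorithm.)"""
    x = x0
    for n in range(steps):
        g = grad(x) + random.gauss(0.0, noise)  # noisy gradient measurement
        x = x - (a / (n + 1)) * g
        if abs(x) > 1e6:  # iterates blew up: the classical scheme diverged
            return float("inf")
    return x

random.seed(0)

# Objective f(x) = x^4 / 4, gradient x^3: very flat near the optimum at 0,
# very steep far from it.
x_flat = classical_sa(lambda x: x**3, x0=1.0)    # crawls slowly toward 0
x_steep = classical_sa(lambda x: x**3, x0=10.0)  # overshoots and diverges
```

Started near the optimum, the iterates creep toward 0 because the gradient x^3 is tiny there; started at x0 = 10, the first step overshoots massively and the iterates explode, which is the divergence the abstract attributes to classical methods when the objective is steep. The paper's contribution is a modified recursion that remains convergent without assuming bounded iterates, but its exact form is not given in this record.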
DOI:10.1109/WSC.1990.129542