Almost sure convergence of randomised‐difference descent algorithm for stochastic convex optimisation

Bibliographic details
Published in: IET Control Theory and Applications, 2021-11, Vol. 15 (17), p. 2183-2194
Main authors: Geng, Xiaoxue; Huang, Gao; Zhao, Wenxiao
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Summary: The stochastic gradient descent algorithm is a classical and useful method for stochastic optimisation. While stochastic gradient descent has been theoretically investigated for decades and successfully applied in machine learning, for example in the training of deep neural networks, it essentially relies on obtaining unbiased estimates of the gradients/subgradients of the objective function. In this paper, by constructing randomised differences of the objective function, a gradient-free algorithm, named the stochastic randomised-difference descent algorithm, is proposed for stochastic convex optimisation. Under the strong convexity assumption on the objective function, it is proved that the estimates generated by stochastic randomised-difference descent converge to the optimal value with probability one, and the convergence rates of both the mean square error of the estimates and the regret functions are established. Finally, some numerical examples are presented.
ISSN: 1751-8644, 1751-8652
DOI: 10.1049/cth2.12184
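
The abstract describes a gradient-free scheme in which randomised differences of noisy objective evaluations stand in for the gradient. Below is a minimal Python sketch of a generic two-point randomised-difference descent of that kind; the helper name randomised_difference_descent, the Rademacher perturbation, and the step-size schedules are illustrative assumptions, not the exact construction or tuning analysed in the paper.

import numpy as np

def randomised_difference_descent(noisy_f, x0, n_iter=5000, a0=1.0, c0=0.5, seed=0):
    # Gradient-free descent driven by two-point randomised differences:
    # at each step the gradient is replaced by a finite difference of two
    # noisy function evaluations along a random Rademacher direction.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        a_k = a0 / k              # decaying step size (assumed schedule)
        c_k = c0 / k ** 0.25      # decaying difference width (assumed schedule)
        delta = rng.choice([-1.0, 1.0], size=x.shape)   # random perturbation direction
        diff = noisy_f(x + c_k * delta) - noisy_f(x - c_k * delta)
        g_hat = diff / (2.0 * c_k) * delta              # randomised-difference gradient estimate
        x = x - a_k * g_hat
    return x

# Usage: a strongly convex quadratic observed with additive noise.
noise_rng = np.random.default_rng(1)
x_star = np.array([1.0, -2.0])
noisy_f = lambda x: float(np.sum((x - x_star) ** 2)) + 0.1 * noise_rng.standard_normal()
print(randomised_difference_descent(noisy_f, x0=np.zeros(2)))

For Rademacher (plus/minus one) directions, multiplying the scaled difference by delta is equivalent to dividing by it componentwise, which is why the estimate above takes this simple form.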