Stochastic Gradient Descent with Pre-Conditioned Polyak Step-Size

Bibliographic details
Published in: Žurnal vyčislitelʹnoj matematiki i matematičeskoj fiziki, 2024-11, Vol. 64 (4), p. 575-586
Main authors: Abdukhakimov, F., Xiang, Ch., Kamzolov, D., Takáč, M.
Format: Article
Language: English
Online access: Full text
Description
Abstract: Stochastic Gradient Descent (SGD) is one of the many iterative optimization methods widely used to solve machine learning problems. These methods display valuable properties and attract researchers and industrial machine learning engineers with their simplicity. However, one weakness of this class of methods is the need to tune the learning rate (step-size) for every combination of loss function and dataset in order to solve an optimization problem and achieve efficient performance within a given time budget. Stochastic Gradient Descent with Polyak Step-size (SPS) is a method that offers an update rule which alleviates the need to fine-tune the learning rate of an optimizer. In this paper, we propose an extension of SPS that employs preconditioning techniques, such as Hutchinson's method, Adam, and AdaGrad, to improve its performance on badly scaled and/or ill-conditioned datasets.
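
To illustrate the idea described in the abstract, the following is a minimal Python sketch, not the authors' reference implementation: it combines a Polyak step-size with an AdaGrad-style diagonal preconditioner, assuming the interpolation setting where the per-sample optimal loss value is taken to be 0. The names sps_adagrad, grad_fn, loss_fn and the constant c are illustrative assumptions, not identifiers from the paper.

import numpy as np

def sps_adagrad(grad_fn, loss_fn, x0, data, n_epochs=10, c=0.5, eps=1e-8):
    # grad_fn(x, sample) -> stochastic gradient, loss_fn(x, sample) -> loss value.
    x = x0.copy()
    g_sq = np.zeros_like(x)                # running sum of squared gradients
    for _ in range(n_epochs):
        for sample in data:
            g = grad_fn(x, sample)
            loss = loss_fn(x, sample)
            g_sq += g * g
            d = np.sqrt(g_sq) + eps        # AdaGrad-style diagonal preconditioner D_t
            # Polyak step-size in the norm induced by D_t^{-1}:
            # gamma = (f_i(x) - f_i*) / (c * ||g||^2_{D^{-1}}), with f_i* assumed 0
            gamma = loss / (c * np.dot(g, g / d) + eps)
            x -= gamma * (g / d)           # preconditioned SGD update
    return x

The diagonal preconditioner rescales each coordinate of the gradient, which is what makes the step-size choice robust on badly scaled problems; other preconditioners mentioned in the abstract (Hutchinson's Hessian-diagonal estimate, Adam's second-moment estimate) would replace the g_sq accumulator in this sketch.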
ISSN:0044-4669
DOI:10.31857/S0044466924040016