Learning Rate Dropout

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2023-11, Vol. 34 (11), p. 9029-9039
Main Authors: Lin, Huangxing, Zeng, Weihong, Zhuang, Yihong, Ding, Xinghao, Huang, Yue, Paisley, John
Format: Article
Language: English
Description
Abstract: Optimization algorithms are of great importance for training a deep neural network efficiently and effectively. However, existing optimization algorithms show unsatisfactory convergence behavior, either converging slowly or failing to avoid bad local optima. Learning rate dropout (LRD) is a new gradient descent technique that promotes faster convergence and better generalization. LRD helps the optimizer actively explore the parameter space by randomly dropping some learning rates (to 0); at each iteration, only parameters whose learning rate is not 0 are updated. Since LRD reduces the number of parameters to be updated at each iteration, convergence becomes easier. For parameters that are not updated, their gradients are still accumulated (e.g., as momentum) by the optimizer for the next update. Accumulating multiple gradients at fixed parameter positions gives the optimizer more energy to escape from saddle points and bad local optima. Experiments show that LRD is surprisingly effective in accelerating training while preventing overfitting.
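
The abstract describes the update rule in enough detail to sketch it: at each iteration, sample a per-parameter Bernoulli mask, accumulate the gradient into the optimizer state (e.g., momentum) for every parameter, but apply the step only where the mask keeps a nonzero learning rate. The NumPy sketch below illustrates this on a plain SGD-with-momentum update; the function name, the keep_prob parameter, and the toy usage at the end are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sgd_momentum_lrd_step(params, grads, velocities,
                          lr=0.1, momentum=0.9, keep_prob=0.5, rng=None):
    """One SGD-with-momentum step with learning rate dropout (illustrative sketch).

    Momentum is accumulated for every parameter, but the update is applied
    only where a Bernoulli mask keeps the learning rate; elsewhere the
    learning rate is effectively dropped to 0 for this iteration.
    """
    rng = np.random.default_rng() if rng is None else rng
    new_params, new_velocities = [], []
    for p, g, v in zip(params, grads, velocities):
        v = momentum * v + g                      # gradient accumulates even when the step is skipped
        mask = rng.random(p.shape) < keep_prob    # True keeps the learning rate, False drops it to 0
        new_params.append(p - lr * mask * v)      # only coordinates with a kept learning rate move
        new_velocities.append(v)
    return new_params, new_velocities


# Toy usage (assumed example): minimize f(w) = ||w||^2 with LRD applied.
w = [np.ones(5)]
v = [np.zeros(5)]
rng = np.random.default_rng(0)
for _ in range(200):
    grads = [2.0 * w[0]]                          # gradient of ||w||^2
    w, v = sgd_momentum_lrd_step(w, grads, v, lr=0.05, keep_prob=0.5, rng=rng)
print(w[0])                                       # entries should be near the minimizer at 0
```

Note that the mask is resampled every iteration, so each coordinate is eventually updated; skipped coordinates release their accumulated momentum in a later step, which is the mechanism the abstract credits for escaping saddle points.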
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2022.3155181