AdaDerivative optimizer: Adapting step-sizes by the derivative term in past gradient information
Published in: Engineering Applications of Artificial Intelligence, 2023-03, Vol. 119, p. 105755, Article 105755
Format: Article
Language: English
Online access: Full text
Abstract: AdaBelief fully utilizes "belief" to iteratively update the parameters of deep neural networks. However, the reliability of this "belief" is determined by the accuracy of the gradient prediction, and the key to that accuracy is the choice of the smoothing parameter $\beta_1$. AdaBelief also suffers from the overshoot problem, which occurs when parameter values exceed the target value and can no longer be changed along the gradient direction. In this paper, we propose AdaDerivative to eliminate the overshoot problem of AdaBelief. The key to AdaDerivative is that the "belief" of AdaBelief is replaced by the exponential moving average (EMA) of the derivative term, which can be constructed as $(1-\beta_2)\sum_{i=1}^{t}\beta_2^{\,t-i}(g_i-g_{i-1})^2$ from the past and current gradients. We validate the performance of AdaDerivative on a variety of tasks, including image classification, language modeling, node classification, image generation, and object detection. Extensive experimental results demonstrate that AdaDerivative achieves state-of-the-art performance.
ISSN: 0952-1976, 1873-6769
DOI: 10.1016/j.engappai.2022.105755
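
The abstract fully specifies only the second-moment accumulator of AdaDerivative: an EMA of the squared gradient difference $(g_i-g_{i-1})^2$, which unrolls to the sum given above. The sketch below is a minimal, hedged reading of that update in plain NumPy; the first-moment EMA, bias correction, epsilon placement, and the hyperparameter defaults (`lr`, `beta1`, `beta2`, `eps`) are Adam/AdaBelief-style assumptions, not details taken from the paper.

```python
import numpy as np

def adaderivative_step(param, grad, prev_grad, m, s, t,
                       lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AdaDerivative-style update on a NumPy parameter array (sketch).

    Only the second moment follows the abstract; everything else is assumed.
    """
    # First moment: EMA of gradients (assumed, as in Adam/AdaBelief).
    m = beta1 * m + (1 - beta1) * grad
    # Second moment: EMA of the squared derivative term (g_t - g_{t-1})^2,
    # s_t = beta2 * s_{t-1} + (1 - beta2) * (g_t - g_{t-1})^2,
    # which unrolls to (1 - beta2) * sum_i beta2^(t-i) * (g_i - g_{i-1})^2.
    s = beta2 * s + (1 - beta2) * (grad - prev_grad) ** 2
    # Bias-corrected estimates (assumed, following Adam).
    m_hat = m / (1 - beta1 ** t)
    s_hat = s / (1 - beta2 ** t)
    # Adaptive step: larger where consecutive gradients agree,
    # smaller where the gradient changes rapidly between steps.
    param = param - lr * m_hat / (np.sqrt(s_hat) + eps)
    return param, m, s

# Usage sketch on a toy quadratic objective f(x) = 0.5 * ||x||^2.
x = np.array([1.0, -2.0])
m = np.zeros_like(x)
s = np.zeros_like(x)
prev_grad = np.zeros_like(x)
for t in range(1, 201):
    grad = x                      # gradient of the toy objective
    x, m, s = adaderivative_step(x, grad, prev_grad, m, s, t)
    prev_grad = grad
print(x)  # should approach the minimizer at the origin
```

Under this reading, the denominator shrinks the step when successive gradients differ sharply (a rapidly changing, hence less trustworthy, direction) and allows larger steps when the gradient is stable, which is the mechanism the abstract credits for avoiding AdaBelief's overshoot.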