Evaluation of learning rate training model on heart disease detection using LSTM

Bibliographic Details
Main Authors: Faruq, Amrul; Adeyani, Bellina Rahmamaulida; Syafaah, Lailis
Format: Conference Proceedings
Language: English
Subjects:
Online Access: Full text
Description
Summary: Various algorithms have been used to adjust the learning rate parameter, but such strategies generally do not focus on improving the resulting accuracy. Most neural network practitioners use the highest learning rate that still allows convergence. Adjustments are made to the weights, and different adjustment functions are used to avoid the impact of improper parameter settings. In this research, two optimizers are used, namely SGD and Adam. During training and testing, the Adam optimizer is run with different manually set learning rates, namely 0.01, 0.05, and 0.09, while the SGD optimizer uses the default scheme in which the learning rate is obtained automatically, reaching 0.000000018. In the experiments conducted, the SGD optimizer with ReduceLROnPlateau achieves an average accuracy of 81%, whereas the Adam optimizer reaches a highest value of only 71%, at the manually set learning rate of 0.05. It can be concluded that manually choosing the learning rate value is risky: if the learning rate is small, the network takes a relatively long time to reach a convergence state, even though a small value guarantees that the training or testing loss will not overshoot the minimum value (0); in contrast, with a large value the loss fluctuates considerably during training, making it difficult to reach a convergence state.
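To make the optimizer comparison concrete, the sketch below shows how such a setup is commonly written in Keras/TensorFlow: the same LSTM classifier is trained once with Adam at a manually chosen learning rate and once with SGD combined with the ReduceLROnPlateau callback, which lowers the learning rate automatically when the validation loss stalls. The network architecture, the placeholder data shapes, and all hyperparameters other than the learning rates are illustrative assumptions, not details taken from the paper.

    # Minimal sketch (assumptions noted above): fixed Adam learning rate vs.
    # SGD + ReduceLROnPlateau on a small LSTM binary classifier.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    def build_lstm(input_shape):
        # Small LSTM binary classifier; the paper's exact architecture is not given here.
        return keras.Sequential([
            layers.Input(shape=input_shape),
            layers.LSTM(64),
            layers.Dense(1, activation="sigmoid"),
        ])

    # Placeholder data shaped (samples, timesteps, features); replace with the
    # heart-disease dataset actually used in the study.
    X = np.random.rand(303, 10, 13).astype("float32")
    y = np.random.randint(0, 2, size=(303,))

    # Variant 1: Adam with a manually chosen learning rate (0.01, 0.05, or 0.09).
    adam_model = build_lstm(X.shape[1:])
    adam_model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.05),
                       loss="binary_crossentropy", metrics=["accuracy"])
    adam_model.fit(X, y, epochs=50, batch_size=32, validation_split=0.2, verbose=0)

    # Variant 2: SGD whose learning rate is reduced automatically when the
    # validation loss plateaus (ReduceLROnPlateau callback).
    sgd_model = build_lstm(X.shape[1:])
    sgd_model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),
                      loss="binary_crossentropy", metrics=["accuracy"])
    reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.1,
                                                  patience=5, min_lr=1e-8)
    sgd_model.fit(X, y, epochs=50, batch_size=32, validation_split=0.2,
                  callbacks=[reduce_lr], verbose=0)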
ISSN: 0094-243X; 1551-7616
DOI: 10.1063/5.0192603