Adaptive Stochastic Conjugate Gradient Optimization for Backpropagation Neural Networks

Bibliographic Details
Published in: IEEE Access, 2024-01, Vol. 12, p. 1-1
Authors: Hashem, Ibrahim Abaker; Alaba, Fadele Ayotunde; Jumare, Muhammad Haruna; Ibrahim, Ashraf Osman; Abulfaraj, Anas W.
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Backpropagation neural networks are widely used to solve complex problems across many disciplines, yet optimizing their parameters remains a significant challenge. Traditional gradient-based optimization methods, such as stochastic gradient descent (SGD), often exhibit slow convergence and sensitivity to hyperparameters. This research proposes an adaptive stochastic conjugate gradient (ASCG) optimization strategy for backpropagation neural networks. ASCG combines the advantages of stochastic optimization and conjugate gradient techniques to increase training efficiency and convergence speed. Based on the observed gradients, the algorithm adaptively computes the learning rate and search direction at each iteration, allowing faster convergence and better generalization. Experimental findings on benchmark datasets show that ASCG optimization outperforms standard optimization techniques in both convergence time and model performance. The proposed ASCG algorithm therefore offers a viable method for improving the training of backpropagation neural networks, making them more effective at tackling complex problems across several domains. As a result, the information formed from the initial seeds grows as the model is trained. The coordinated operation of ASCG's conjugate gradient and adaptive stochastic components improves learning and helps reach global minima. The results indicate that the ASCG algorithm achieves 21 percent higher accuracy on the HMT dataset and performs better than existing methods on other datasets (DIR-Lab dataset). The experiments revealed that the conjugate gradient achieves an efficiency of 95 percent when using principal component analysis features, compared with 94 percent when using the correlation-heatmap feature-selection approach, with an MSE of 0.0678.
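To illustrate the general idea of combining stochastic gradients with conjugate search directions and an adaptive step size, the following is a minimal sketch, not the authors' ASCG implementation: the Polak-Ribiere conjugate coefficient, the gradient-norm-based learning-rate rule, and the toy least-squares problem are all assumptions made for demonstration only.

```python
# Illustrative sketch of a stochastic conjugate gradient update with an
# adaptive step size. Assumptions: Polak-Ribiere beta with restart, a
# gradient-norm-based learning rate, and a toy least-squares objective.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: minimize ||Xw - y||^2 over mini-batches.
X = rng.normal(size=(1000, 20))
true_w = rng.normal(size=20)
y = X @ true_w + 0.01 * rng.normal(size=1000)

def batch_gradient(w, batch_size=32):
    # Stochastic gradient estimated on a random mini-batch.
    idx = rng.choice(len(X), size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    return 2.0 * Xb.T @ (Xb @ w - yb) / batch_size

w = np.zeros(20)
g_prev = batch_gradient(w)
d = -g_prev                      # initial search direction: steepest descent

for step in range(500):
    # Adaptive learning rate: shrink the step as the gradient norm grows
    # (a stand-in for the paper's adaptive rule, which is not specified here).
    lr = 0.1 / (1.0 + np.linalg.norm(g_prev))
    w = w + lr * d

    g = batch_gradient(w)
    # Polak-Ribiere conjugate coefficient, clipped at zero (automatic restart).
    beta = max(0.0, g @ (g - g_prev) / (g_prev @ g_prev + 1e-12))
    d = -g + beta * d            # new conjugate search direction
    g_prev = g

print("final parameter error:", np.linalg.norm(w - true_w))
```

In this sketch the conjugate term reuses information from the previous search direction, while the adaptive step size reacts to the current gradient magnitude; a full ASCG implementation for a backpropagation network would apply the same update to the network weights using mini-batch gradients from backpropagation.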
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3370859