Adaptive dynamic self-learning grey wolf optimization algorithm for solving global optimization problems and engineering problems



Bibliographic details
Published in: Mathematical biosciences and engineering : MBE, 2024-02, Vol. 21 (3), pp. 3910-3943
Main authors: Zhang, Yijie; Cai, Yuhang
Format: Article
Language: English
Online access: Full text
Description
Abstract: The grey wolf optimization algorithm (GWO) is a recent metaheuristic algorithm. GWO has the advantages of a simple structure, few parameters to adjust, and high efficiency, and has been applied to various optimization problems. However, the original GWO search process is guided entirely by the best three wolves, resulting in low population diversity, susceptibility to local optima, a slow convergence rate, and an imbalance between exploitation and exploration. To address these shortcomings, this paper proposes an adaptive dynamic self-learning grey wolf optimization algorithm (ASGWO). First, the convergence factor was segmented and made nonlinear to balance the algorithm's global and local search and to improve the convergence rate. Second, in the original GWO the wolves approach the leader in a straight line, which is too simple and ignores much of the information along the path; therefore, a dynamic logarithmic spiral that decreases nonlinearly with the number of iterations was introduced to expand the search range in the early stage and enhance local exploitation in the later stage. Third, the fixed step size in the original GWO can cause oscillations and an inability to escape local optima, so a dynamic self-learning step size was designed that learns from the current evolution success rate and iteration count to help the algorithm escape local optima and prevent oscillations. Finally, the original GWO has low population diversity, which makes the algorithm highly susceptible to becoming trapped in local optima; a novel position update strategy was therefore proposed that uses the global optimum and randomly generated positions as learning samples and dynamically controls the influence of these samples to increase population diversity and avoid premature convergence. In comparisons with the traditional algorithms GWO, PSO, and WOA and the recent variants EOGWO and SOGWO on 23 classical test functions, ASGWO effectively improves convergence accuracy and convergence speed and shows a strong ability to escape from local optima. In addition, ASGWO performs well on engineering problems (the gear train, pressure vessel, and car crashworthiness problems) and on feature selection.
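The abstract names four mechanisms but does not reproduce the paper's equations, so the Python sketch below is only illustrative: the piecewise convergence factor, the spiral radius schedule, the success-rate-driven step size, and the sample-mixing weights are all assumed forms chosen to match the abstract's description, not the authors' actual formulas.

import numpy as np

def sphere(x):
    return float(np.sum(x**2))

def asgwo_sketch(f, dim=10, lo=-100.0, hi=100.0, pop=30, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.array([f(x) for x in X])
    success_rate = 0.5  # initial guess; updated from the evolution success rate

    for t in range(iters):
        p = t / iters
        # 1) Segmented nonlinear convergence factor (assumed form): decays
        #    slowly in the first half (global search) and quickly in the
        #    second half (local search); the two pieces meet at p = 0.5.
        a = 2.0 - 2.0 * p**2 if p < 0.5 else 6.0 * (1.0 - p)**2

        order = np.argsort(fit)
        leaders = X[order[:3]]           # alpha, beta, delta wolves
        gbest = X[order[0]].copy()       # copy so in-place updates don't alias

        # 3) Dynamic self-learning step size (assumed form): larger while the
        #    population keeps improving, damped as iterations accumulate, so
        #    the swarm can escape local optima without oscillating.
        step = (0.5 + success_rate) * np.exp(-2.0 * p)

        improved = 0
        for i in range(pop):
            # Classic GWO pull toward the three leaders.
            cand = np.zeros(dim)
            for L in leaders:
                A = a * (2.0 * rng.random(dim) - 1.0)
                C = 2.0 * rng.random(dim)
                cand += L - A * np.abs(C * L - X[i])
            cand /= 3.0

            # 2) Dynamic logarithmic spiral (assumed form): the radius shrinks
            #    nonlinearly with iterations, widening early search and
            #    tightening late exploitation around the best wolf.
            l = rng.uniform(-1.0, 1.0)
            r = np.exp(-3.0 * p)
            cand = cand + r * np.exp(l) * np.cos(2.0 * np.pi * l) * (gbest - cand)

            # 4) Learning-sample update (assumed form): mix the global optimum
            #    with a random position; the random sample dominates early
            #    (diversity), the global best dominates late (convergence).
            w = 1.0 - p
            x_rand = rng.uniform(lo, hi, dim)
            cand = cand + step * ((1.0 - w) * rng.random(dim) * (gbest - cand)
                                  + w * rng.random(dim) * (x_rand - cand))

            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fit[i]:              # greedy selection
                X[i], fit[i] = cand, fc
                improved += 1

        success_rate = improved / pop    # feeds the next step-size update

    best = int(np.argmin(fit))
    return X[best], float(fit[best])

if __name__ == "__main__":
    x, fx = asgwo_sketch(sphere)
    print("best value:", fx)

Under these assumptions the sketch preserves the abstract's intent: broad, spiral-widened, randomly seeded search early in the run, and success-rate-damped, best-guided refinement late in the run.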
ISSN: 1551-0018
DOI: 10.3934/mbe.2024174