QQLMPA: A quasi-opposition learning and Q-learning based marine predators algorithm

Bibliographic details
Published in: Expert Systems with Applications, March 2023, Vol. 213, Article 119246
Authors: Zhao, Shangrui; Wu, Yulu; Tan, Shuang; Wu, Jinran; Cui, Zhesen; Wang, You-Gan
Format: Article
Language: English
Description
Abstract: Many real-world engineering and scientific problems boil down to optimization problems, which are difficult to solve with traditional methods. Meta-heuristics are appealing algorithms for solving optimization problems while keeping computational costs reasonable. The marine predators algorithm (MPA) is a modern optimization meta-heuristic, inspired by the widespread Lévy and Brownian foraging strategies of ocean predators as well as optimal encounter-rate strategies in biological interactions between predator and prey. However, MPA is not without its shortcomings. This paper proposes a quasi-opposition-based learning and Q-learning based marine predators algorithm (QQLMPA), which offers multiple improvements over standard MPA. First, Q-learning allows MPA to make full use of the information generated by previous iterations. In addition, quasi-opposition-based learning increases population diversity, reducing the risk of convergence to inferior local optima. Numerical experiments demonstrate that QQLMPA performs better on 32 benchmark optimization functions and three engineering problems: the design of a pressure vessel, a hydrostatic thrust bearing, and a speed reducer.

• Q-learning is introduced into MPA to adaptively choose the search mode.
• Quasi-opposition learning is used to enhance the exploration of MPA.
• The proposed optimizer achieves strong performance on continuous optimization.
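The abstract names two generic ingredients: quasi-opposition-based learning and a tabular Q-learning rule for choosing the search mode. As a rough illustration only (not the paper's exact design), the Python sketch below shows both in their standard textbook form; the state/action encoding, the reward, and the parameter values alpha, gamma, and eps are illustrative assumptions.

import numpy as np

def quasi_opposite(population, lb, ub, rng):
    # Quasi-opposition-based learning: for each coordinate x in [lb, ub],
    # sample uniformly between the interval midpoint m = (lb + ub) / 2
    # and the opposite point xo = lb + ub - x.
    mid = (lb + ub) / 2.0
    opposite = lb + ub - population
    lo = np.minimum(mid, opposite)
    hi = np.maximum(mid, opposite)
    return rng.uniform(lo, hi)

class QTable:
    # Minimal tabular Q-learning for picking a search mode per state,
    # e.g. 0 = Brownian move, 1 = Levy move, 2 = mixed phase.
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def choose(self, state, rng):
        # Epsilon-greedy: usually exploit the best-known mode, sometimes explore.
        if rng.random() < self.eps:
            return int(rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state):
        # Standard update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
        td = reward + self.gamma * self.q[next_state].max() - self.q[state, action]
        self.q[state, action] += self.alpha * td

For example, after one MPA iteration one might reward the chosen mode with +1 if the best fitness improved and 0 otherwise, then call update() before choosing the next mode; how QQLMPA actually defines its states and rewards is specified in the paper itself, not here.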
ISSN: 0957-4174
eISSN: 1873-6793
DOI: 10.1016/j.eswa.2022.119246