Out-of-the-box parameter control for evolutionary and swarm-based algorithms with distributed reinforcement learning
Published in: Swarm Intelligence, 2023-09, Vol. 17 (3), p. 173-217
Format: Article
Language: English
Online access: Full text
Abstract: Parameter control methods for metaheuristics based on reinforcement learning proposed so far usually present the following shortcomings: (1) their training processes are highly time-consuming and cannot benefit from parallel or distributed platforms; (2) they are sensitive to their hyperparameters, so the quality of the final results depends heavily on the chosen values; and (3) only limited benchmarks have been used to assess their generality. This paper addresses these issues by proposing a methodology for training out-of-the-box parameter control policies for mono-objective, non-niching evolutionary and swarm-based algorithms using distributed reinforcement learning with population-based training. The proposed methodology can be applied to any mono-objective optimization problem and to any mono-objective, non-niching evolutionary or swarm-based algorithm. Results from extensive experiments show that the proposed method satisfactorily addresses all the aforementioned issues, outperforming constant, random, and human-designed policies in several different scenarios.
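To make the general idea behind the abstract concrete, the following is a minimal, self-contained sketch of reinforcement-learning-based parameter control for a mono-objective metaheuristic. It is not the paper's method: it uses a simple tabular Q-learning controller on a (1+1)-ES, whereas the paper trains out-of-the-box policies with distributed reinforcement learning and population-based training. All names and values here (the sphere test function, the coarse search state, the reward definition, the step-size actions) are assumptions made for illustration only.

```python
# Illustrative sketch only, NOT the paper's methodology.
# An RL policy observes a coarse search state of a (1+1)-ES and adjusts the
# mutation step size sigma online. State, reward, and actions are assumptions.
import random


def sphere(x):
    """Mono-objective test problem: minimise the sum of squares."""
    return sum(v * v for v in x)


class QControlPolicy:
    """Tabular Q-learning policy choosing a multiplicative update for sigma."""
    ACTIONS = (0.5, 1.0, 2.0)  # shrink, keep, or grow the step size

    def __init__(self, n_states=3, eps=0.1, alpha=0.3, gamma=0.9):
        self.q = [[0.0] * len(self.ACTIONS) for _ in range(n_states)]
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, state):
        if random.random() < self.eps:            # epsilon-greedy exploration
            return random.randrange(len(self.ACTIONS))
        row = self.q[state]
        return row.index(max(row))

    def update(self, s, a, reward, s_next):
        target = reward + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.alpha * (target - self.q[s][a])


def run_one_plus_one_es(policy, dim=10, budget=2000):
    """(1+1)-ES whose step size is controlled by the RL policy each iteration."""
    x = [random.uniform(-5, 5) for _ in range(dim)]
    fx, sigma, successes = sphere(x), 1.0, 0
    state = 1  # coarse state: recent success level in {0, 1, 2}
    for _ in range(budget):
        action = policy.act(state)
        sigma *= policy.ACTIONS[action]
        y = [xi + random.gauss(0.0, sigma) for xi in x]
        fy = sphere(y)
        improved = fy < fx
        # Reward: relative improvement on success, small penalty otherwise.
        reward = (fx - fy) / (abs(fx) + 1e-12) if improved else -0.01
        if improved:
            x, fx = y, fy
        successes = min(2, successes + 1) if improved else max(0, successes - 1)
        policy.update(state, action, reward, successes)
        state = successes
    return fx


if __name__ == "__main__":
    policy = QControlPolicy()
    # Reuse one policy across runs so it accumulates control experience.
    for run in range(5):
        print(f"run {run}: best f = {run_one_plus_one_es(policy):.3e}")
```

In the paper's setting, the single Q-table above would be replaced by a policy trained in parallel across many optimization episodes, with population-based training tuning the RL hyperparameters; this sketch only conveys the observe-adjust-reward loop that parameter control with reinforcement learning relies on.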
ISSN: 1935-3812, 1935-3820
DOI: 10.1007/s11721-022-00222-z