Prior knowledge guided differential evolution
Published in: Soft Computing (Berlin, Germany), 2017-11, Vol. 21 (22), pp. 6841-6858
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Summary: Differential evolution (DE) has become one of the most popular paradigms of evolutionary algorithms. Over the past two decades, the DE research community has accumulated a body of prior knowledge, and exploiting it to enhance the performance of DE is an interesting research topic. Along this line, a prior knowledge guided DE (called PKDE) is proposed in this paper. PKDE extracts two levels of prior knowledge, namely the macro level and the micro level. To integrate these two levels of prior knowledge effectively, the control parameters of PKDE are tuned based on two distributions (Cauchy and normal), with the aim of alleviating premature convergence at the early stage and speeding up convergence toward the global optimum at the later stage. In addition, a self-adaptive mutation strategy is implemented based on our previous study. PKDE is compared with eight DE variants and seven non-DE algorithms on two sets of benchmark test functions from IEEE CEC2005 and IEEE CEC2014. Systematic experiments demonstrate that the overall performance of PKDE is very competitive.
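The abstract's idea of tuning DE control parameters from two distributions can be sketched as follows. This is a minimal illustration, not the paper's actual method: it assumes the common adaptive-DE convention of drawing the scale factor F from a Cauchy distribution (whose heavy tails promote exploration) and the crossover rate CR from a normal distribution (concentrated near its mean, favoring convergence). The location and scale values are illustrative assumptions.

```python
import math
import random

def sample_parameters(mu_f=0.5, mu_cr=0.5):
    """Sample DE control parameters (F, CR) from two distributions.

    Illustrative sketch: F ~ Cauchy(mu_f, 0.1), regenerated until
    positive and truncated at 1.0; CR ~ Normal(mu_cr, 0.1), clipped
    to [0, 1]. The constants 0.1 are assumed, not from the paper.
    """
    # Cauchy sample via the inverse-CDF method: for U ~ Uniform(0, 1),
    # mu + gamma * tan(pi * (U - 0.5)) is Cauchy(mu, gamma).
    f = -1.0
    while f <= 0.0:
        f = mu_f + 0.1 * math.tan(math.pi * (random.random() - 0.5))
    f = min(f, 1.0)

    # Normal sample for CR, clipped to the valid crossover-rate range.
    cr = min(1.0, max(0.0, random.gauss(mu_cr, 0.1)))
    return f, cr

if __name__ == "__main__":
    f, cr = sample_parameters()
    print(f"F = {f:.3f}, CR = {cr:.3f}")
```

In adaptive DE variants that use this kind of sampling, each individual typically receives its own (F, CR) pair per generation, and the distribution means are updated from the parameters of successful offspring.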
ISSN: 1432-7643, 1433-7479
DOI: 10.1007/s00500-016-2235-6