A Cognitively Inspired Hybridization of Artificial Bee Colony and Dragonfly Algorithms for Training Multi-layer Perceptrons
Published in: Cognitive Computation, 2018-12, Vol. 10 (6), p. 1096-1134
Main authors: ,
Format: Article
Language: English
Online access: Full text
Abstract: The objective of this article is twofold. On the one hand, we introduce a cognitively inspired hybrid metaheuristic that combines the strengths of two existing metaheuristics: the artificial bee colony (ABC) algorithm and the dragonfly algorithm (DA). The aim of this hybridization is to mitigate slow convergence and entrapment in local optima by striking a good balance between the global and local search components of the constituent algorithms. On the other hand, we use the proposed metaheuristic to train a multi-layer perceptron (MLP) as an alternative to existing traditional and metaheuristic-based learning algorithms, with the goal of improving overall accuracy by optimizing the MLP's weights and biases. The proposed hybrid ABC/DA (HAD) algorithm comprises three main components: the static and dynamic swarming behavior phase of DA and two search phases drawn from ABC. The first component performs global search (the DA phase), the second performs local search (the onlooker bee phase), and the third performs global search again (the modified scout bee phase). The resulting optimizer is employed to train an MLP, seeking a set of weights and biases that yields higher performance than traditional learning algorithms and other metaheuristic optimizers. The proposed algorithm was first evaluated on 33 benchmark functions to test its performance on numerical optimization problems; HAD was then evaluated as an MLP trainer on six standard classification datasets. In both cases, HAD was compared with several recent and established metaheuristics from swarm intelligence and evolutionary computing. Experimental results show that the HAD algorithm is clearly superior to the standard ABC and DA algorithms, as well as to other well-known algorithms, in terms of solution quality, convergence speed, local-minima avoidance, and the accuracy of the trained MLPs. The proposed algorithm is a promising metaheuristic technique for general numerical optimization and for training MLPs. Specific applications and use cases remain to be explored fully, but the encouraging results of this study support them.
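To make the described setup concrete, the sketch below packs an MLP's weights and biases into a single flat vector, uses the network's mean-squared training error as the fitness function, and evolves a population through the three components named in the abstract: a DA-style global-search phase, an ABC onlooker local-search phase, and a modified scout phase. This is a minimal illustration under assumed, textbook-style ABC/DA update rules; the names (`had_train`, `mlp_mse`), the simplified food/enemy step, and the `limit` parameter are this sketch's own choices, not the authors' exact HAD equations, which are given in the paper itself.

```python
import numpy as np

def unpack(vec, n_in, n_hid, n_out):
    """Split one flat parameter vector into the MLP's weights and biases."""
    i = 0
    W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = vec[i:i + n_hid]; i += n_hid
    W2 = vec[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = vec[i:i + n_out]
    return W1, b1, W2, b2

def mlp_mse(vec, X, y, n_hid):
    """Fitness: mean-squared error of a one-hidden-layer MLP."""
    W1, b1, W2, b2 = unpack(vec, X.shape[1], n_hid, y.shape[1])
    h = np.tanh(X @ W1 + b1)                      # hidden activations
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid outputs
    return np.mean((out - y) ** 2)

def had_train(fitness, dim, pop=30, iters=500, lb=-1.0, ub=1.0,
              limit=50, seed=0):
    """Illustrative three-component loop: DA global search, ABC onlooker
    local search, and a (modified) scout re-seeding phase."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop, dim))   # candidate weight vectors
    fit = np.array([fitness(x) for x in X])
    trial = np.zeros(pop, dtype=int)      # stagnation counters for the scouts

    def greedy(i, cand):
        """Keep the better of the old and new solution (ABC-style selection)."""
        f = fitness(cand)
        if f < fit[i]:
            X[i], fit[i], trial[i] = cand, f, 0
        else:
            trial[i] += 1

    for _ in range(iters):
        food, enemy = X[fit.argmin()], X[fit.argmax()]
        # 1) DA phase (global search): attraction toward the best solution
        #    ("food") and repulsion from the worst ("enemy"); a simplified
        #    stand-in for DA's full separation/alignment/cohesion step.
        for i in range(pop):
            step = (rng.random(dim) * (food - X[i])
                    + 0.1 * rng.random(dim) * (X[i] - enemy))
            greedy(i, np.clip(X[i] + step, lb, ub))
        # 2) Onlooker phase (local search): fitness-proportional selection,
        #    then a one-dimensional ABC neighbourhood move.
        prob = fit.max() - fit + 1e-12
        prob /= prob.sum()
        for _ in range(pop):
            i = rng.choice(pop, p=prob)
            k = (i + rng.integers(1, pop)) % pop   # random partner, k != i
            j = rng.integers(dim)
            cand = X[i].copy()
            cand[j] = np.clip(cand[j] + rng.uniform(-1, 1) * (X[i, j] - X[k, j]),
                              lb, ub)
            greedy(i, cand)
        # 3) Modified scout phase (global search): re-seed any solution that
        #    has stagnated for `limit` consecutive trials, sparing the best.
        for i in range(pop):
            if trial[i] > limit and i != fit.argmin():
                X[i] = rng.uniform(lb, ub, dim)
                fit[i], trial[i] = fitness(X[i]), 0
    return X[fit.argmin()], fit.min()

# Example: train a tiny 2-4-1 MLP on XOR with the sketch above.
Xd = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
yd = np.array([[0], [1], [1], [0]], dtype=float)
n_hid = 4
dim = 2 * n_hid + n_hid + n_hid * 1 + 1   # all weights and biases, flattened
best_vec, err = had_train(lambda v: mlp_mse(v, Xd, yd, n_hid), dim, lb=-5, ub=5)
print("final training MSE:", err)
```

The greedy replacement plus trial counter is what lets the scout phase detect stagnation: a solution that keeps failing to improve is eventually thrown back into the search space, which is the local-minima escape mechanism the abstract credits to the modified scout bee component.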
ISSN: 1866-9956, 1866-9964
DOI: 10.1007/s12559-018-9588-3