Neural Network Trained by Biogeography-Based Optimizer with Chaos for Sonar Data Set Classification


Detailed Description

Bibliographic Details
Published in: Wireless Personal Communications, 2017-08, Vol. 95 (4), p. 4623-4642
Main Authors: Mosavi, M. R., Khishe, M., Akbarisani, M.
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract: Multi-layer Perceptron Neural Networks (MLP NNs) are among the most popular NNs for classifying real-world targets. Training is the most important stage in developing these networks and has attracted considerable attention in recent years. Gradient-descent and recursive methods have long been the standard approaches for training MLP networks; poor classification, getting stuck in local minima, and slow convergence are among their drawbacks. In recent years, heuristic and meta-heuristic algorithms have become popular means of overcoming these drawbacks. This paper uses a method called the biogeography-based optimizer with chaos (CBBO) to train MLP NNs. Thanks to its immigration and emigration operators and a separate mutation for each individual, the method offers greater exploration capability than comparable heuristic methods. To evaluate the proposed method, the algorithm is compared with ant colony optimization, particle swarm optimization, the genetic algorithm, differential evolution, and the classic BBO on four data sets. The measured metrics are convergence speed, the probability of getting stuck in local minima, and classification accuracy. The results indicate that the new algorithm yields better or comparable results in all cases relative to the aforementioned algorithms.
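
The abstract describes CBBO only at a high level. As a rough orientation, the Python sketch below shows one plausible interpretation, assuming a logistic chaotic map supplies the decision values in BBO's immigration step and each habitat encodes the flat weight vector of a small one-hidden-layer MLP. The network shape, the logistic map, and every hyper-parameter here are illustrative assumptions, not the paper's settings.

import numpy as np

# Minimal sketch of a chaos-enhanced biogeography-based optimizer (CBBO)
# training a one-hidden-layer MLP; all choices below (logistic map as the
# chaos source, network size, rates, iteration count) are assumptions
# for illustration rather than the paper's configuration.

rng = np.random.default_rng(0)

def mlp_error(w, X, y, n_hidden):
    # Decode the flat habitat vector w into MLP weights and return the MSE.
    n_in = X.shape[1]
    W1 = w[:n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = w[n_in * n_hidden:n_in * n_hidden + n_hidden]
    off = n_in * n_hidden + n_hidden
    W2 = w[off:off + n_hidden]
    b2 = w[off + n_hidden]
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    return np.mean((out - y) ** 2)

def cbbo_train(X, y, n_hidden=5, pop=30, iters=60, p_mut=0.02):
    dim = X.shape[1] * n_hidden + 2 * n_hidden + 1
    habitats = rng.uniform(-1.0, 1.0, size=(pop, dim))
    chaos = 0.7                                   # logistic-map state
    for _ in range(iters):
        fit = np.array([mlp_error(w, X, y, n_hidden) for w in habitats])
        habitats = habitats[np.argsort(fit)]      # best habitat first
        mu = np.linspace(1.0, 0.0, pop)           # emigration rate per rank
        lam = 1.0 - mu                            # immigration rate per rank
        new = habitats.copy()
        for i in range(pop):
            for d in range(dim):
                chaos = 4.0 * chaos * (1.0 - chaos)   # chaotic sequence
                if chaos < lam[i]:
                    # Roulette-wheel choice of an emigrating habitat.
                    j = rng.choice(pop, p=mu / mu.sum())
                    new[i, d] = habitats[j, d]
                if rng.random() < p_mut:          # per-individual mutation
                    new[i, d] = rng.uniform(-1.0, 1.0)
        new[0] = habitats[0]                      # elitism: keep the best
        habitats = new
    fit = np.array([mlp_error(w, X, y, n_hidden) for w in habitats])
    return habitats[np.argmin(fit)]

# Toy usage with random data standing in for a real (e.g. sonar) data set.
X = rng.normal(size=(60, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
best_w = cbbo_train(X, y)
print("training MSE:", mlp_error(best_w, X, y, n_hidden=5))

The key idea reflected in this sketch is that a chaotic sequence, rather than a uniform random draw, decides when a habitat accepts an immigrating feature; the rest follows standard BBO migration with mutation and elitism.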
ISSN: 0929-6212, 1572-834X
DOI: 10.1007/s11277-017-4110-x