An Integrated Parallel Inner Deep Learning Models Information Fusion With Bayesian Optimization for Land Scene Classification in Satellite Images
Published in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2023, Vol. 16, pp. 9888-9903
Main authors: , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Classification of remote scenes in satellite imagery has many applications, such as surveillance and earth observation. Classifying high-resolution remote sensing images remains a major challenge in machine learning. Several automated techniques based on machine learning and deep learning have been introduced in the literature; however, these techniques perform poorly on images with complex textures, complex backgrounds, and small objects. In this work, we propose a new automated technique based on the inner fusion of two deep learning models and feature selection. In the initial phase, a new network is designed through inner-level fusion of two networks with combined weights. Next, the hyperparameters are initialized using Bayesian optimization (BO); hyperparameters are usually initialized manually, which is not an efficient selection strategy. The designed model is then trained, and deep features are extracted from a deeper layer. In the last step, a poor-rich controlled entropy-based feature selection technique is developed to select the best features. The selected features are finally classified using machine learning classifiers. We evaluated the proposed architecture on three publicly available datasets: Aerial Image Dataset (AID), UC-Merced, and WHU-RS19, obtaining accuracies of 96.3%, 95.6%, and 97.8%, respectively. Comparison with state-of-the-art techniques shows improved accuracy.
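The abstract outlines a concrete pipeline: fuse deep features from two parallel networks, tune hyperparameters with Bayesian optimization, select features by an entropy-based criterion, and classify the selected features with a conventional machine-learning classifier. The Python sketch below illustrates that general flow on synthetic data, assuming scikit-learn and scikit-optimize are available; the random features standing in for the two CNNs' outputs, the generic Shannon-entropy ranking, and the SVM search space are all illustrative assumptions, not the paper's actual fused network or its poor-rich controlled entropy selector.

```python
# A minimal sketch of the abstract's pipeline: feature fusion ->
# entropy-based selection -> Bayesian-optimized classifier.
# Synthetic data stands in for the two CNNs' deep features; the
# entropy ranking is a generic stand-in, not the paper's
# "poor-rich controlled" criterion.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from skopt import gp_minimize          # assumes scikit-optimize is installed
from skopt.space import Real

# Pretend the two halves of X are deep features from two parallel CNNs.
X, y = make_classification(n_samples=600, n_features=512,
                           n_informative=60, random_state=0)
X_a, X_b = X[:, :256], X[:, 256:]
X_fused = np.concatenate([X_a, X_b], axis=1)  # inner-level feature fusion

def shannon_entropy(col, bins=16):
    """Histogram-based Shannon entropy of one feature column."""
    hist, _ = np.histogram(col, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Keep the 200 highest-entropy fused features (illustrative threshold).
entropy = np.array([shannon_entropy(X_fused[:, j])
                    for j in range(X_fused.shape[1])])
X_sel = X_fused[:, np.argsort(entropy)[-200:]]

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3,
                                          random_state=0)

def objective(params):
    """Validation error of an SVM for a candidate (C, gamma) pair."""
    C, gamma = params
    clf = SVC(C=C, gamma=gamma).fit(X_tr, y_tr)
    return 1.0 - clf.score(X_te, y_te)

# Bayesian optimization over the classifier's hyperparameters (the paper
# applies BO to the fused network; an SVM keeps the sketch self-contained).
result = gp_minimize(objective,
                     [Real(1e-2, 1e2, prior="log-uniform"),   # C
                      Real(1e-4, 1e0, prior="log-uniform")],  # gamma
                     n_calls=20, random_state=0)
print("best (C, gamma):", result.x, "accuracy:", 1.0 - result.fun)
```

In the paper's setting, the synthetic matrix would be replaced by activations taken from a deeper layer of the trained fused network, and BO would drive the network's own hyperparameters rather than those of the downstream classifier.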
ISSN: 1939-1404, 2151-1535
DOI: 10.1109/JSTARS.2023.3324494