UFS-LSTM: unsupervised feature selection with long short-term memory network for remote sensing scene classification


Bibliographic details
Published in: Evolutionary intelligence, 2023-02, Vol. 16 (1), pp. 299-315
Main authors: Kumar, Sandeep; Setty, Suresh Lakshmi Narasimha
Format: Article
Language: English
Online access: Full text
Description
Abstract: The aim of this research is to perform remote sensing scene classification, which supports numerous strategic research fields such as land use and land cover monitoring. However, classifying an enormous amount of remote sensing data is a challenging task. In this work, a new model is introduced to improve feature extraction for better scene classification. A multiscale Retinex technique is employed for color restoration and contrast enhancement of aerial images collected from the UC Merced, Aerial Image Dataset, and RESISC45 benchmarks. Feature extraction is then carried out using the steerable pyramid transform, gray-level co-occurrence matrix (GLCM) features, and the local ternary pattern (LTP). This feature extraction mechanism reduces the risk of overfitting and improves the training process and data visualization. Because the extracted features are high-dimensional, an unsupervised feature selection method based on multi-subspace randomization and collaboration with the state transition algorithm is proposed to select active features for better multiclass classification. The selected features are fed to a long short-term memory (LSTM) network for scene-type classification. The experimental results showed that the proposed model achieved 99.14%, 98.09%, and 99.25% overall classification accuracy on UC Merced, RESISC45, and the Aerial Image Dataset, respectively. Compared to existing models such as self-attention-based deep feature fusion, a multitask learning system with a convolutional neural network, multilayer feature fusion Wasserstein generative adversarial networks, and a transfer learning model, the proposed model showed a minimum of 0.03% and a maximum of 18.6% improvement in classification accuracy on the same three datasets.
ISSN: 1864-5909 (print), 1864-5917 (electronic)
DOI: 10.1007/s12065-021-00660-4
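Of the texture descriptors named in the abstract, the local ternary pattern is the simplest to illustrate. The sketch below is the generic LTP formulation (compare each neighbor to the center with a tolerance, then split the ternary code into two binary codes); the threshold value and the 8-neighborhood ordering are illustrative assumptions, not the paper's exact settings:

```python
def ltp_codes(img, r, c, t=5):
    """Local ternary pattern at pixel (r, c) of a 2D grayscale image.

    Each of the 8 neighbors is compared to the center with tolerance t:
    +1 if brighter than center + t, -1 if darker than center - t, else 0.
    The ternary pattern is split into two binary codes (upper for +1,
    lower for -1), as in the standard LTP formulation.
    """
    center = img[r][c]
    # 8-neighborhood, clockwise from top-left (illustrative ordering)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = lower = 0
    for bit, (dr, dc) in enumerate(offsets):
        n = img[r + dr][c + dc]
        if n >= center + t:
            upper |= 1 << bit     # neighbor clearly brighter
        elif n <= center - t:
            lower |= 1 << bit     # neighbor clearly darker
    return upper, lower

# Example on a tiny 3x3 patch, center pixel value 55, tolerance 5
patch = [[50, 60, 50],
         [40, 55, 70],
         [55, 55, 30]]
print(ltp_codes(patch, 1, 1, t=5))  # -> (10, 149)
```

In practice the two codes are histogrammed over image regions to form the texture feature vector; the tolerance t is what makes LTP less noise-sensitive than the local binary pattern it extends.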