XcelNet14: A Novel Deep Learning Framework for Aerial Scene Classification
Published in: IEEE Access, 2024, Vol. 12, pp. 196266-196281
Main authors: , , , , ,
Format: Article
Language: English
Keywords:
Online access: Full text
Abstract: Image classification is critically important to numerous remote sensing applications; however, the growing number of dataset classes, along with their diversity and varying acquisition conditions, poses a significant challenge to its effectiveness. Convolutional Neural Network models serve as a fundamental component in the image classification framework, demonstrating classification accuracy that often surpasses 90% when integrated with various benchmark classifiers. It is recognized that their performance could potentially be enhanced through specific architectural adjustments. This study introduces a comparable model, XcelNet14, developed for remote sensing image classification. The architecture comprises 11 convolutional layers and 3 fully connected layers, incorporating three residual stacks for enhanced performance. The proposed architecture undergoes a comprehensive evaluation using three benchmark remote sensing datasets: WHU-RS-19, UCMerced, and NWPU-RESISC, conducted in two distinct phases: 1) utilizing the complete set of features, and 2) employing a feature set reduced by 50%. Comprehensive simulations indicate that the proposed model achieves an overall classification accuracy ranging from 98% to 99.9%, thereby surpassing the benchmark architectures by up to 5%, all while maintaining lower computational costs. A comprehensive statistical analysis further supports the results obtained.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3519341
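The abstract specifies only the coarse layer budget of XcelNet14 (11 convolutional layers, 3 fully connected layers, and three residual stacks); the exact channel widths, kernel sizes, pooling placement, and stack positions are not given in this record. The PyTorch sketch below is therefore a minimal illustration of one layout that satisfies that budget. All concrete hyperparameters (64/128/256 channels, 3x3 kernels, `num_classes=19` as in WHU-RS-19, 224x224 inputs) are assumptions for illustration, not the authors' published configuration.

```python
# Hypothetical sketch of an XcelNet14-style layout: 11 conv + 3 FC layers,
# three residual stacks. Channel widths, kernel sizes, and stack placement
# are illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn


class ResidualStack(nn.Module):
    """Two 3x3 convolutions with an identity skip connection (assumed stack design)."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(x + self.body(x))


class XcelNet14Sketch(nn.Module):
    """Illustrative 11-conv / 3-FC layout with three residual stacks.

    Conv count: 2 (stem) + 3 stacks x 2 + 2 transitions + 1 final conv = 11.
    """

    def __init__(self, num_classes: int = 19):  # e.g. WHU-RS-19 has 19 classes
        super().__init__()
        self.features = nn.Sequential(
            # Stem: conv 1-2
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            ResidualStack(64),                      # conv 3-4 (stack 1)
            nn.Conv2d(64, 128, 3, padding=1),       # conv 5 (transition)
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            ResidualStack(128),                     # conv 6-7 (stack 2)
            nn.Conv2d(128, 256, 3, padding=1),      # conv 8 (transition)
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            ResidualStack(256),                     # conv 9-10 (stack 3)
            nn.Conv2d(256, 256, 3, padding=1),      # conv 11
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(            # 3 fully connected layers
            nn.Flatten(),
            nn.Linear(256, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 256), nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = XcelNet14Sketch(num_classes=19)
    dummy = torch.randn(1, 3, 224, 224)   # assumed remote-sensing patch size
    print(model(dummy).shape)             # torch.Size([1, 19])
```

In this sketch the penultimate fully connected layer plays the role of the feature vector that a downstream benchmark classifier (or a 50%-reduced feature subset, as described in the abstract's second evaluation phase) could consume; how the authors actually extract and reduce features is not specified in this record.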