A multi-scale residual capsule network for hyperspectral image classification with small training samples


Bibliographic Details
Published in: Multimedia Tools and Applications, 2023-11, Vol. 82 (26), p. 40473-40501
Main authors: Shi, Meilin; Zeng, Xilong; Ren, Jiansi; Shi, Yichang
Format: Article
Language: English
Online access: Full text
Description
Abstract: Convolutional Neural Networks (CNNs) have been widely employed in hyperspectral image (HSI) classification. However, a CNN cannot capture the relative spatial relationships among features well, hindering further improvement of classification performance. The recently introduced Capsule Network (CapsNet) represents features as vectors, which strengthens the ability to capture feature-space information and relative positions, compensating for this shortcoming of CNNs. To further improve the classification performance of CapsNet on HSI under limited labeled samples, this article proposes a multi-scale residual capsule network (MR-CapsNet). The proposed method adopts extended multi-scale convolution blocks to fully extract spectral-spatial features. The features extracted by convolution kernels of different sizes are then fused by pointwise convolution. A residual structure splices the fused features with the input data, alleviating vanishing gradients and overfitting. Finally, the fused feature information is classified at the capsule layer through the dynamic routing mechanism. Comparative experiments were carried out on three public hyperspectral image datasets. The results indicate that the overall classification accuracy of the proposed method improves on the recent DC-CapsNet by 4.13%, 2.98%, and 1.43% on the three datasets, respectively.
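To make the described block structure concrete, below is a minimal PyTorch sketch of a multi-scale convolution block with pointwise fusion and a residual connection, as outlined in the abstract. It is an illustration under stated assumptions, not the authors' implementation: the kernel sizes (3/5/7), channel counts, and the name MultiScaleResidualBlock are hypothetical, and the capsule layer with dynamic routing that follows in MR-CapsNet is omitted.

```python
import torch
import torch.nn as nn

class MultiScaleResidualBlock(nn.Module):
    """Illustrative sketch of a multi-scale block with pointwise fusion
    and a residual connection. Kernel sizes and channel counts are
    assumptions, not taken from the paper."""

    def __init__(self, channels: int):
        super().__init__()
        # Parallel branches with different kernel sizes extract
        # spectral-spatial features at multiple scales.
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(channels, channels, kernel_size=7, padding=3)
        # Pointwise (1x1) convolution fuses the concatenated branch outputs.
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the multi-scale features along the channel axis.
        multi_scale = torch.cat(
            [self.branch3(x), self.branch5(x), self.branch7(x)], dim=1
        )
        fused = self.fuse(multi_scale)
        # Residual splice with the input aids gradient flow, as the
        # abstract describes for preventing vanishing gradients.
        return self.act(fused + x)

# Usage on a toy HSI patch: batch of 2, 16 spectral channels, 9x9 window.
block = MultiScaleResidualBlock(channels=16)
out = block(torch.randn(2, 16, 9, 9))
print(out.shape)  # torch.Size([2, 16, 9, 9])
```

In the full MR-CapsNet pipeline, the fused output of such blocks would feed a capsule layer whose routing coefficients are computed by dynamic routing; that stage is left out here for brevity.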
ISSN: 1380-7501
eISSN: 1573-7721
DOI: 10.1007/s11042-023-15017-5