Computationally Lightweight Hyperspectral Image Classification Using a Multiscale Depthwise Convolutional Network with Channel Attention

Bibliographic Details
Published in: IEEE Geoscience and Remote Sensing Letters, 2023-01, Vol. 20, p. 1-1
Main Authors: Ye, Zhen; Li, Cuiling; Liu, Qingxin; Bai, Lin; Fowler, James E.
Format: Article
Language: English
Subjects:
Online Access: Order full text
Description
Abstract: Convolutional networks have been widely used for the classification of hyperspectral images; however, such networks are notorious for their large number of trainable parameters and high computational complexity. Additionally, traditional convolution-based methods are typically implemented as a simple cascade of a number of convolutions using a single-scale convolution kernel. In contrast, a lightweight multiscale convolutional network is proposed, capitalizing on feature extraction at multiple scales in parallel branches followed by feature fusion. In this approach, 2D depthwise convolution is used instead of conventional convolution in order to reduce network complexity without sacrificing classification accuracy. Furthermore, multiscale channel attention is also employed to selectively exploit discriminative capability across various channels. To this end, multiple 1D convolutions with varying kernel sizes provide channel attention at multiple scales, again with the goal of minimizing network complexity. Experimental results reveal that the proposed network not only outperforms other competing lightweight classifiers in terms of classification accuracy but also requires fewer parameters and significantly less computational cost.
ISSN: 1545-598X
EISSN: 1558-0571
DOI: 10.1109/LGRS.2023.3285208
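
For readers who want a concrete picture of the two mechanisms the abstract names, the following is a minimal PyTorch sketch of (a) parallel 2D depthwise convolutions at several spatial scales with feature fusion and (b) channel attention built from multiple 1D convolutions with varying kernel sizes. It is not the authors' implementation: the branch kernel sizes, the summation-based fusion, the ECA-style pooling-plus-1D-convolution attention, the ReLU activation, and all layer dimensions are assumptions made purely for illustration.

# Minimal sketch (not the published code) of a multiscale depthwise block
# with multiscale channel attention; all sizes below are assumptions.
import torch
import torch.nn as nn


class MultiscaleChannelAttention(nn.Module):
    """Channel attention from several 1D convolutions over the pooled channel vector."""

    def __init__(self, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(1, 1, k, padding=k // 2, bias=False) for k in kernel_sizes
        )

    def forward(self, x):                                    # x: (B, C, H, W)
        y = x.mean(dim=(2, 3)).unsqueeze(1)                  # global average pool -> (B, 1, C)
        att = sum(conv(y) for conv in self.convs) / len(self.convs)
        att = torch.sigmoid(att).transpose(1, 2).unsqueeze(-1)  # (B, C, 1, 1) weights
        return x * att


class MultiscaleDepthwiseBlock(nn.Module):
    """Parallel 2D depthwise convolutions at multiple scales, fused and re-weighted."""

    def __init__(self, channels, scales=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2,
                      groups=channels, bias=False)           # depthwise: groups == channels
            for k in scales
        )
        self.pointwise = nn.Conv2d(channels, channels, 1)    # mix information across channels
        self.attention = MultiscaleChannelAttention()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        fused = sum(branch(x) for branch in self.branches)   # multiscale feature fusion
        return self.attention(self.act(self.pointwise(fused)))


# Usage example: an 11x11 patch of a hyperspectral cube with 30 (e.g., PCA-reduced) bands.
if __name__ == "__main__":
    block = MultiscaleDepthwiseBlock(channels=30)
    patch = torch.randn(8, 30, 11, 11)                       # (batch, bands, height, width)
    print(block(patch).shape)                                 # torch.Size([8, 30, 11, 11])

Because each depthwise branch uses groups equal to the channel count and the attention path works on a single pooled vector per image, the parameter count of a block like this stays far below that of a cascade of full 2D convolutions, which is the complexity argument the abstract makes.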