Two-Branch Deeper Graph Convolutional Network for Hyperspectral Image Classification
Published in: IEEE Transactions on Geoscience and Remote Sensing, 2023-01, Vol. 61, p. 1-1
Main authors: , , , ,
Format: Article
Language: English
Abstract: The graph convolutional network (GCN) has recently attracted great attention in hyperspectral image (HSI) classification due to its strong ability to aggregate information from neighborhood nodes. However, a GCN model usually suffers from the over-smoothing problem (i.e., all nodes' representations converge to a stationary point) as the number of GCN layers increases. In addition, GCNs typically operate on superpixel-level nodes to reduce computational cost, so pixel-level features cannot be well captured. To address these problems, a novel two-branch deeper GCN (TBDGCN) is proposed that combines the advantages of superpixel-based GCNs and pixel-based CNNs, simultaneously extracting superpixel-level and pixel-level features of HSIs. In the GCN branch, a GCN module with the DropEdge technique and residual connections is designed to alleviate the over-smoothing and over-fitting problems, enabling a deeper network structure with more than ten layers. In the CNN branch, a mixed attention mechanism is constructed to capture spatial positional information and channel information, extracting attention-based spectral-spatial features. The features of the GCN and CNN branches are then fused for classification. Experimental results on three benchmark HSI data sets show that the classification performance of TBDGCN surpasses that of existing GCN models, especially for small sample sizes.
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2023.3257369
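The DropEdge-plus-residual recipe that the abstract credits with enabling a deeper GCN can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the authors' implementation: the graph, feature dimensions, drop rate, and layer count are all hypothetical, and a single shared weight matrix is used only to keep the example short.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edge(adj, drop_rate, rng):
    """DropEdge: randomly remove a fraction of edges from a symmetric adjacency matrix."""
    keep = rng.random(adj.shape) >= drop_rate
    keep = np.triu(keep, 1)            # sample each undirected edge once
    keep = keep | keep.T               # keep the matrix symmetric
    return adj * keep

def normalize(adj):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} with self-loops."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer_residual(x, adj, w, drop_rate, rng):
    """One GCN layer: ReLU(A_norm X W) plus a residual (skip) connection."""
    a_norm = normalize(drop_edge(adj, drop_rate, rng))
    h = np.maximum(a_norm @ x @ w, 0.0)
    return h + x                        # residual connection keeps gradients flowing

# Toy superpixel graph: 4 nodes with 3-dim features (hypothetical sizes).
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.standard_normal((4, 3))
w = rng.standard_normal((3, 3)) * 0.1

h = x
for _ in range(12):                     # a "deeper" stack of more than ten layers
    h = gcn_layer_residual(h, adj, x=None or h, w=w, drop_rate=0.3, rng=rng) if False else gcn_layer_residual(h, adj, w, drop_rate=0.3, rng=rng)

print(h.shape)
```

Without DropEdge and the residual term, repeated multiplication by the normalized adjacency drives all node representations toward a common stationary point; randomly thinning the graph each layer and adding the input back are two simple ways to delay that collapse.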