CCTNet: CNN and Cross-Shaped Transformer Hybrid Network for Remote Sensing Image Semantic Segmentation

Bibliographic details
Published in: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2024, Vol. 17, pp. 19986-19997
Main authors: Wu, Honglin; Zeng, Zhaobin; Huang, Peng; Yu, Xinyu; Zhang, Min
Format: Article
Language: English
Online access: Full text
Description
Summary: Deep learning methods have achieved great success in remote sensing image segmentation in recent years, but building a lightweight segmentation model with comprehensive local and global feature extraction capabilities remains challenging. In this article, we propose a convolutional neural network (CNN) and cross-shaped transformer hybrid network (CCTNet) for semantic segmentation of high-resolution remote sensing images. The model follows an encoder-decoder structure: it employs ResNet18 as the encoder to extract hierarchical features, and builds a transformer decoder on efficient cross-shaped self-attention to fully model local and global information while keeping the network lightweight. Moreover, the transformer block introduces a mixed-scale convolutional feedforward network to further enhance multiscale feature extraction. Furthermore, a simplified and efficient feature aggregation module progressively aggregates local and global information across stages. Extensive comparison experiments on the ISPRS Vaihingen and Potsdam datasets show that our method outperforms state-of-the-art lightweight methods.
ISSN: 1939-1404, 2151-1535
DOI: 10.1109/JSTARS.2024.3487003
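
Illustrative sketch (not part of the record): the abstract describes the decoder's "efficient cross-shaped self-attention" only at a high level. Below is a minimal PyTorch sketch of stripe-based cross-shaped attention in the CSWin style that such decoders typically build on. The class name, the stripe_width parameter, and the even split of heads between horizontal and vertical stripes are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class CrossShapedSelfAttention(nn.Module):
    """Hypothetical cross-shaped (stripe) self-attention: half the heads
    attend within horizontal stripes of height `stripe_width`, the other
    half within vertical stripes of the same width (CSWin-style)."""

    def __init__(self, dim, num_heads=8, stripe_width=2):
        super().__init__()
        assert dim % num_heads == 0 and num_heads % 2 == 0
        self.num_heads = num_heads
        self.sw = stripe_width
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, H, W):
        # x: (B, H*W, C); H and W must be divisible by stripe_width.
        B, N, C = x.shape
        sw, d, hg = self.sw, self.head_dim, self.num_heads // 2
        qkv = self.qkv(x).view(B, N, 3, self.num_heads, d).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]           # each: (B, heads, N, d)

        outs = []
        for g, horizontal in ((0, True), (1, False)):
            heads = slice(g * hg, (g + 1) * hg)

            def to_stripes(t):
                # Partition tokens into stripes: (B, hg, n_stripes, stripe_len, d).
                t = t.reshape(B, hg, H, W, d)
                if not horizontal:                  # treat columns as rows
                    t = t.permute(0, 1, 3, 2, 4)
                return t.reshape(B, hg, -1, sw * (W if horizontal else H), d)

            qs, ks, vs = map(to_stripes, (q[:, heads], k[:, heads], v[:, heads]))
            attn = (qs @ ks.transpose(-2, -1)) * self.scale
            o = attn.softmax(dim=-1) @ vs           # attention within each stripe
            if horizontal:
                o = o.reshape(B, hg, H, W, d)
            else:
                o = o.reshape(B, hg, W, H, d).permute(0, 1, 3, 2, 4)
            outs.append(o.reshape(B, hg, N, d))

        out = torch.cat(outs, dim=1)                # (B, heads, N, d)
        return self.proj(out.transpose(1, 2).reshape(B, N, C))

# Example: tokens from an 8x8 feature map with 64 channels.
attn = CrossShapedSelfAttention(dim=64, num_heads=8, stripe_width=2)
x = torch.randn(2, 8 * 8, 64)
print(attn(x, H=8, W=8).shape)                      # torch.Size([2, 64, 64])

Restricting each head group to stripes cuts attention cost from O((HW)^2) per layer to roughly O(HW * sw * (H + W)), which is one plausible reading of the abstract's claim that cross-shaped attention keeps the network lightweight.

The "mixed-scale convolutional feedforward network" is likewise unspecified in the record; a common pattern it suggests is parallel depthwise convolutions at several kernel sizes between the two pointwise projections of a transformer FFN. The kernel sizes and the summation below are guesses, not the paper's design.

class MixedScaleConvFFN(nn.Module):
    """Hypothetical FFN with parallel multi-scale depthwise convolutions."""

    def __init__(self, dim, expansion=4, kernel_sizes=(1, 3, 5)):
        super().__init__()
        hidden = dim * expansion
        self.fc1 = nn.Linear(dim, hidden)
        self.dwconvs = nn.ModuleList([
            nn.Conv2d(hidden, hidden, k, padding=k // 2, groups=hidden)
            for k in kernel_sizes
        ])
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x, H, W):
        # x: (B, H*W, C)
        B, N, C = x.shape
        h = self.fc1(x).transpose(1, 2).reshape(B, -1, H, W)   # tokens -> map
        h = sum(conv(h) for conv in self.dwconvs)              # mix scales
        h = h.flatten(2).transpose(1, 2)                       # map -> tokens
        return self.fc2(self.act(h))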