Semantic Segmentation of High-Resolution Remote Sensing Images with Improved U-Net Based on Transfer Learning

Bibliographic Details
Published in: International Journal of Computational Intelligence Systems, 2023-11, Vol. 16(1), pp. 1-11, Article 181
Authors: Zhang, Hua; Jiang, Zhengang; Zheng, Guoxun; Yao, Xuekun
Format: Article
Language: English
Online access: Full text
Abstract: Semantic segmentation of high-resolution remote sensing images has emerged as a focus of research in the remote sensing field, as it can accurately identify objects on the ground and determine their location. Traditional deep learning-based semantic segmentation, however, requires a large amount of annotated data, which makes it unsuitable for high-resolution remote sensing tasks with limited resources. It is therefore important to build a semantic segmentation method suited to high-resolution remote sensing images. This paper proposes an improved U-Net model based on transfer learning to solve the semantic segmentation problem for high-resolution remote sensing images. The model follows the symmetric encoder–decoder structure of U-Net. In the encoder, transfer learning is applied, with VGG16 used as the backbone of the feature extraction network; in the decoder, feature maps are upsampled by bilinear interpolation and then fused, layer by layer, with the feature maps of the corresponding encoder layers, finally producing a prediction for each pixel to achieve precise localization. To verify the efficacy of the proposed network, experiments are performed on the ISPRS Vaihingen dataset. The experiments show that the method achieves high-quality semantic segmentation results on this high-resolution remote sensing dataset: compared with the traditional U-Net, the MIoU is 1.70%, 2.20%, and 2.33% higher on the training, validation, and test sets, respectively, and the IoU for the car category is 4.26%, 6.89%, and 5.44% higher.
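The encoder–decoder design described in the abstract could be sketched roughly as below. This is a minimal illustrative sketch, not the authors' published code: the framework (PyTorch), the class name VGG16UNet, the split points of the VGG16 backbone, the decoder channel widths, and num_classes=6 (the six ISPRS Vaihingen categories) are all assumptions made here for illustration.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class VGG16UNet(nn.Module):
    """U-Net-style network with an ImageNet-pretrained VGG16 encoder,
    bilinear upsampling in the decoder, and skip-connection fusion with
    the corresponding encoder feature maps (hypothetical sketch)."""

    def __init__(self, num_classes: int = 6):
        super().__init__()
        # Encoder: VGG16 convolutional layers, pretrained weights reused
        # via transfer learning, split at each max-pooling stage.
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        feats = list(vgg.features.children())
        self.enc1 = nn.Sequential(*feats[:5])     # 64 ch,  1/2 resolution
        self.enc2 = nn.Sequential(*feats[5:10])   # 128 ch, 1/4
        self.enc3 = nn.Sequential(*feats[10:17])  # 256 ch, 1/8
        self.enc4 = nn.Sequential(*feats[17:24])  # 512 ch, 1/16
        self.enc5 = nn.Sequential(*feats[24:])    # 512 ch, 1/32

        # Decoder: bilinear upsampling, then convolution after concatenating
        # the corresponding encoder feature map (multiscale fusion).
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec4 = self._block(512 + 512, 512)
        self.dec3 = self._block(512 + 256, 256)
        self.dec2 = self._block(256 + 128, 128)
        self.dec1 = self._block(128 + 64, 64)
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)  # per-pixel prediction

    @staticmethod
    def _block(in_ch: int, out_ch: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Input height and width should be divisible by 32.
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        e4 = self.enc4(e3)
        e5 = self.enc5(e4)
        d4 = self.dec4(torch.cat([self.up(e5), e4], dim=1))
        d3 = self.dec3(torch.cat([self.up(d4), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        # Restore full spatial resolution before the classification head.
        return self.head(self.up(d1))


# Example usage on a dummy patch:
# model = VGG16UNet(num_classes=6)
# logits = model(torch.randn(1, 3, 256, 256))  # -> shape (1, 6, 256, 256)
```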
ISSN: 1875-6883
DOI: 10.1007/s44196-023-00364-w