Application of an improved U-Net with image-to-image translation and transfer learning in peach orchard segmentation
Published in: International Journal of Applied Earth Observation and Geoinformation, 2024-06, Vol. 130, p. 103871, Article 103871
Main authors: , , , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract:
Highlights:
• The semantic segmentation model U-Net is improved for peach orchard segmentation.
• CycleGAN and transfer learning improve the accuracy of peach orchard segmentation.
• Coupling UAV data with satellite images enables large-scale mapping of peach orchards.
Peach cultivation holds significant economic importance, and mapping the spatial distribution of peach orchards supports yield prediction and precision agriculture. In this study, we introduce a new U-Net semantic segmentation model that uses ResNet50 as the backbone network, augmented with an Efficient Multi-Scale Attention (EMA) module and a LayerScale adaptive scaling parameter. To address style differences between images from Unmanned Aerial Vehicles (UAVs), Google Earth, and the Sentinel-2 satellite, we incorporate Cycle-Consistent Generative Adversarial Networks (CycleGAN). This step brings UAV images into a style comparable to that of the Google Earth and Sentinel-2 images, while the feature detail of high-spatial-resolution UAV images is carried over to Google Earth and Sentinel-2 images through transfer learning. The results demonstrate that using ResNet50 as the backbone of the U-Net model yields higher accuracy than using VGG16: the Mean Intersection over Union (MIoU) values for UAV and Sentinel-2 images are higher by 0.49 % and 0.95 %, respectively. Introducing EMA increased the MIoU values for UAV, Google Earth, and Sentinel-2 images by 0.87 %, 1.71 %, and 1.74 %, respectively, and introducing the LayerScale adaptive scaling parameter increased them by a further 0.31 %, 0.33 %, and 1.44 %, respectively. After applying CycleGAN and transfer learning, the MIoU increased by 1.02 %, 0.15 %, and 1.57 % for UAV, Google Earth, and Sentinel-2 images, respectively, reaching final MIoU values of 97.39 %, 92.08 %, and 84.54 %. Comparative analysis with the DeepLabV3+, PSPNet, and HRNet models demonstrates the superior mapping performance of the proposed method, which also shows good generalization and mapping speed across six test sites in the research area. Overall, this approach delivers high precision and efficiency in peach orchard mapping across various spatial resolutions and holds potential for diverse peach orchard mapping applications.
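The abstract describes the architectural changes but no code is given. Below is a minimal PyTorch sketch of one U-Net decoder stage showing where an attention branch and a LayerScale gate could sit in such a design; the `ChannelGate` class is a simple squeeze-and-excitation stand-in for the paper's EMA module (the real EMA of Ouyang et al. uses grouped multi-scale branches with cross-spatial learning), and all class names, channel sizes, and initialisation values are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LayerScale(nn.Module):
    """Per-channel learnable scaling (as in CaiT), initialised small so the
    attended branch starts as a near-identity residual."""
    def __init__(self, channels, init_value=1e-2):
        super().__init__()
        self.gamma = nn.Parameter(init_value * torch.ones(channels, 1, 1))

    def forward(self, x):
        return self.gamma * x

class ChannelGate(nn.Module):
    """Placeholder attention: squeeze-and-excitation style channel gating,
    standing in for the paper's EMA module."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)

class DecoderBlock(nn.Module):
    """One U-Net decoder stage: upsample, concatenate the encoder skip
    connection, convolve, then add a LayerScale-gated attention branch."""
    def __init__(self, in_ch, skip_ch, out_ch, attention):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.attention = attention
        self.scale = LayerScale(out_ch)

    def forward(self, x, skip):
        x = self.conv(torch.cat([self.up(x), skip], dim=1))
        return x + self.scale(self.attention(x))

# Smoke test with hypothetical channel sizes (a ResNet50 encoder would
# supply the skip features at matching resolutions).
block = DecoderBlock(in_ch=256, skip_ch=128, out_ch=128,
                     attention=ChannelGate(128))
x, skip = torch.randn(1, 256, 32, 32), torch.randn(1, 128, 64, 64)
print(block(x, skip).shape)  # torch.Size([1, 128, 64, 64])
```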
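The reported accuracies are Mean Intersection over Union values. For reference, a small self-contained NumPy sketch of the standard confusion-matrix MIoU computation follows; the function name and the toy label maps are ours, not from the paper.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """MIoU from a confusion matrix; pred/target are integer label maps."""
    # Build the confusion matrix in one pass: row = true class, col = predicted.
    cm = np.bincount(
        num_classes * target.ravel() + pred.ravel(),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)
    intersection = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - intersection
    # Guard against division by zero for classes absent from both maps.
    iou = intersection / np.maximum(union, 1)
    return float(iou.mean())

# Toy 2x2 label maps with two classes (background = 0, orchard = 1).
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(f"MIoU = {mean_iou(pred, target, num_classes=2):.4f}")  # 0.5833
```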
ISSN: 1569-8432
DOI: 10.1016/j.jag.2024.103871