Mangrove semantic segmentation on aerial images


Detailed Description

Bibliographic Details
Published in: Revista IEEE América Latina 2024-05, Vol. 22 (5), p. 379-386
Authors: Arias-Aguilar, Jose Anibal, Lopez-Jimenez, Efren, Ramirez-Cardenas, Oscar D., Herrera-Lozada, J. Carlos, Hevia-Montiel, Nidiyare
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: In the Yucatan Peninsula, there is a rich diversity of mangroves, notably including Rhizophora mangle, Avicennia germinans, and Laguncularia racemosa. These mangroves contribute to the recovery of natural areas degraded by human activities. Additionally, they serve as natural habitats for various animal and plant species. Studies have highlighted the significance of preserving and restoring these species through traditional methods. More recently, the integration of remote sensing and deep learning techniques has allowed for the automated detection and quantification of mangroves. In this study, we explore the application of deep neural network techniques to address computer vision challenges in the field of remote sensing. Specifically, we focus on the detection and quantification of mangroves in remote image sensing, employing transfer learning and fine-tuning with three distinct deep neural network architectures: SegNet-VGG16, U-Net, and Fully Convolutional Network (R-FCN), with the latter two based on the ResNet network. To evaluate the performance of each architecture, we applied key evaluation metrics, including Intersection over Union (IoU), Dice Coefficient, Precision, Sensitivity, and Accuracy. Our results indicate that SegNet-VGG16 exhibited the highest levels of Precision (98.03%) and Accuracy (97.03%), while U-Net outperformed in terms of IoU (96.97%), Dice Coefficient (92.20%), and Sensitivity (96.81%).
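The evaluation metrics named in the abstract (IoU, Dice Coefficient, Precision, Sensitivity) can be sketched for binary segmentation masks as below. This is an illustrative sketch, not the paper's code; the function names and example masks are assumptions.

```python
import numpy as np

def iou(pred, target):
    # Intersection over Union: |A ∩ B| / |A ∪ B|
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def dice(pred, target):
    # Dice Coefficient: 2|A ∩ B| / (|A| + |B|)
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2 * inter / total if total else 1.0

def precision(pred, target):
    # True positives over all predicted positives
    inter = np.logical_and(pred, target).sum()
    return inter / pred.sum() if pred.sum() else 1.0

def sensitivity(pred, target):
    # True positives over all actual positives (recall)
    inter = np.logical_and(pred, target).sum()
    return inter / target.sum() if target.sum() else 1.0

# Toy 2x3 binary masks (illustrative only)
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
target = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(iou(pred, target))   # 0.5  (2 overlapping pixels / 4 in union)
print(dice(pred, target))  # ~0.667
```

In practice these scores would be computed per class over full prediction maps and averaged across the test set.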
ISSN: 1548-0992
DOI: 10.1109/TLA.2024.10500718