Mesh-Based DGCNN: Semantic Segmentation of Textured 3-D Urban Scenes
Saved in:
| Published in: | IEEE Transactions on Geoscience and Remote Sensing, 2023, Vol. 61, pp. 1-12 |
|---|---|
| Main authors: | , , , , |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Full text |
| Summary: | Textured 3-D mesh is one of the final user products in photogrammetry and remote sensing. However, research on the semantic segmentation of complex urban scenes represented by textured 3-D meshes is in its infancy. We present a mesh-based dynamic graph convolutional neural network (DGCNN) for the semantic segmentation of textured 3-D meshes. To represent each mesh facet, composite input feature vectors are constructed by concatenating the face-inherent features, i.e., the $(x, y, z)$ coordinates of the center of gravity (CoG), texture values, and normal vectors (NVs). A texture fusion module is embedded into the proposed mesh-based DGCNN to generate high-level semantic features from the high-resolution texture information, which is useful for semantic segmentation. We achieve competitive accuracies when the proposed method is applied to the SUM mesh datasets. The overall accuracy (OA), Kappa coefficient (Kap), mean precision (mP), mean recall (mR), mean F1 score (mF1), and mean intersection over union (mIoU) are 93.3%, 88.7%, 79.6%, 83.0%, 80.7%, and 69.6%, respectively. In particular, the OA, mean class accuracy (mAcc), mIoU, and mF1 increase by 0.3%, 12.4%, 3.4%, and 6.9%, respectively, compared with the state-of-the-art method. |
| ISSN: | 0196-2892 (print), 1558-0644 (electronic) |
| DOI: | 10.1109/TGRS.2023.3266273 |
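
The summary describes building composite per-facet input features by concatenating each face's center-of-gravity coordinates, texture values, and normal vector. Below is a minimal sketch of that construction using NumPy; the function name `face_features` and the simplification that texture is already averaged to a single RGB value per face are illustrative assumptions, not the paper's exact pipeline (which samples high-resolution texture and fuses it in a dedicated module).

```python
import numpy as np

def face_features(vertices, faces, face_rgb):
    """Concatenate per-facet CoG coordinates, texture values, and normal
    vectors into composite 9-D input features (one row per mesh face).

    vertices : (V, 3) float array of x, y, z vertex coordinates
    faces    : (F, 3) int array of vertex indices per triangle
    face_rgb : (F, 3) float array, one texture color per face
               (an assumed simplification of the paper's texture input)
    """
    tri = vertices[faces]                  # (F, 3, 3) triangle corner coordinates
    cog = tri.mean(axis=1)                 # center of gravity of each face
    # Face normal from the cross product of two edge vectors, normalized.
    normals = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    return np.concatenate([cog, face_rgb, normals], axis=1)  # (F, 9)

# Usage on a toy two-triangle mesh:
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
tris = np.array([[0, 1, 2], [1, 3, 2]])
rgb = np.array([[0.8, 0.2, 0.2], [0.2, 0.8, 0.2]])
print(face_features(verts, tris, rgb).shape)  # (2, 9)
```

The resulting (F, 9) matrix is the kind of per-facet input a DGCNN-style network can consume, with the dynamic graph built over facets (e.g., by k-nearest neighbors on CoG or feature distance) rather than over points.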