Dual-Graph Attention Convolution Network for 3-D Point Cloud Classification

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-04, Vol. 35 (4), p. 4813-4825
Main Authors: Huang, Chang-Qin, Jiang, Fan, Huang, Qiong-Hao, Wang, Xi-Zhe, Han, Zhong-Mei, Huang, Wei-Yu
Format: Article
Language: English
Description
Abstract: Three-dimensional point cloud classification is fundamental but still challenging in 3-D vision. Existing graph-based deep learning methods fail to learn both low-level extrinsic and high-level intrinsic features together. These two levels of features are critical to improving classification accuracy. To this end, we propose a dual-graph attention convolution network (DGACN). The idea of DGACN is to use two types of graph attention convolution operations with a feedback graph feature fusion mechanism. Specifically, we exploit graph geometric attention convolution to capture low-level extrinsic features in 3-D space. Furthermore, we apply graph embedding attention convolution to learn multiscale low-level extrinsic and high-level intrinsic fused graph features together. Moreover, DGACN distinguishes points belonging to different parts of real-world 3-D objects, which in practice yields more robust performance on 3-D point cloud classification tasks than other competitive methods. Our extensive experimental results show that the proposed network achieves state-of-the-art performance on both the synthetic ModelNet40 and real-world ScanObjectNN datasets.
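
The record contains no code, but the core operation the abstract describes, attention-weighted graph convolution over each point's neighborhood, can be sketched as below. This is a minimal illustration under assumptions, not the authors' implementation: the names (knn_graph, GraphAttentionConv), the [x_i, x_j - x_i] edge-feature construction, and all dimensions are chosen here for clarity only.

```python
# Minimal sketch of a graph attention convolution over a k-NN graph of a
# point cloud, in the spirit of the geometric attention branch described
# in the abstract. All design details below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def knn_graph(x, k):
    # x: (B, N, C) point features; returns indices of the k nearest
    # neighbors for each point, shape (B, N, k).
    dist = torch.cdist(x, x)                                  # pairwise distances
    return dist.topk(k + 1, largest=False).indices[..., 1:]  # drop self-match

class GraphAttentionConv(nn.Module):
    """Attention-weighted aggregation over each point's k-NN neighborhood."""
    def __init__(self, in_dim, out_dim, k=20):
        super().__init__()
        self.k = k
        self.feat = nn.Linear(2 * in_dim, out_dim)  # edge feature [x_i, x_j - x_i]
        self.attn = nn.Linear(2 * in_dim, 1)        # scalar attention per edge

    def forward(self, x):
        # x: (B, N, C)
        B, N, C = x.shape
        idx = knn_graph(x, self.k)                               # (B, N, k)
        neighbors = torch.gather(
            x.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, C))           # (B, N, k, C)
        center = x.unsqueeze(2).expand_as(neighbors)
        edge = torch.cat([center, neighbors - center], dim=-1)   # (B, N, k, 2C)
        alpha = F.softmax(self.attn(edge), dim=2)                # weights over neighbors
        return (alpha * F.relu(self.feat(edge))).sum(dim=2)      # (B, N, out_dim)

# Example: 1024 points with 3-D coordinates as input features
layer = GraphAttentionConv(in_dim=3, out_dim=64, k=20)
out = layer(torch.rand(2, 1024, 3))  # -> (2, 1024, 64)
```

In DGACN, two branches along these lines (one attending over geometric neighborhoods in 3-D space, one over learned embedding neighborhoods) are combined through the paper's feedback graph feature fusion mechanism; the fusion itself is not detailed in the abstract and is therefore omitted from this sketch.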
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2022.3162301