Rethinking local-to-global representation learning for rotation-invariant point cloud analysis
Published in: Pattern Recognition, 2024-10, Vol. 154, p. 110624, Article 110624
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Point cloud analysis has drawn much attention in recent years, yet most existing point-based deep networks ignore the rotation-invariant property of the encoded features, which leads to poor performance on 3D shapes with arbitrary rotations. In this paper, we propose a novel rotation-invariant method that embeds both distinctive local and global rotation-invariant information. Specifically, we design a two-branch network that separately extracts purely local and purely global rotation-invariant features. In the global branch, we leverage canonical transformation to extract global representations, while in the local branch, we utilize hand-crafted geometric features (e.g., relative distances and angles) to embed local representations. To fuse the features from the two branches, we introduce an attention-based fusion module that adaptively integrates the local-to-global representation by considering the geometric context of each point. In particular, unlike existing rotation-invariant works, we further introduce a self-attention unit into the global branch to embed non-local information, and insert multiple fusion modules into the local branch to emphasize the global features. Extensive experiments on standard benchmarks show that our method achieves consistent and competitive performance on various downstream tasks, as well as the best performance on shape classification on the ModelNet40 dataset, with a 0.8% accuracy gain over state-of-the-art methods. The code and pre-trained models are available at https://github.com/CentauriStar/Rotation-Invariant-Point-Cloud-Analysis.
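For intuition about the local branch described in the abstract, here is a minimal sketch of hand-crafted rotation-invariant features built from relative distances and angles over k-nearest-neighbor patches, written in PyTorch. The function names, the choice of k, and the particular angle reference are illustrative assumptions, not the paper's actual feature set (that is in the linked repository).

```python
import torch

def knn_indices(points: torch.Tensor, k: int) -> torch.Tensor:
    """Return (N, k) indices of each point's k nearest neighbors (self excluded)."""
    dists = torch.cdist(points, points)                      # (N, N) pairwise distances
    return dists.topk(k + 1, largest=False).indices[:, 1:]   # drop the zero-distance self match

def local_ri_features(points: torch.Tensor, k: int = 16) -> torch.Tensor:
    """Hand-crafted rotation-invariant per-point features, shape (N, k, 2):
    for each of the k neighbors, its distance to the center point and the
    cosine of the angle between its offset and the patch's mean offset."""
    idx = knn_indices(points, k)
    offsets = points[idx] - points.unsqueeze(1)              # (N, k, 3) relative vectors
    dist = offsets.norm(dim=-1, keepdim=True)                # invariant: rotations preserve norms
    mean_dir = offsets.mean(dim=1, keepdim=True)             # (N, 1, 3) rotates with the patch
    cos = torch.cosine_similarity(offsets, mean_dir.expand_as(offsets), dim=-1)
    return torch.cat([dist, cos.unsqueeze(-1)], dim=-1)      # invariant: angles are preserved too

# Sanity check: a random orthogonal transform (rotation or reflection, both of
# which preserve distances and angles) changes coordinates but not the features.
pts = torch.randn(256, 3)
Q = torch.linalg.qr(torch.randn(3, 3)).Q
print((local_ri_features(pts) - local_ri_features(pts @ Q.T)).abs().max())  # ~1e-6 or smaller
```

Because every output entry is either the norm of a relative vector or the cosine of an angle between two such vectors, applying any rigid rotation to the input leaves the features unchanged, which is exactly the invariance the abstract relies on.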
•The use of fused features is explored to represent rotation invariance for point clouds.
•Distinctive local and global information is exploited and adaptively fused (a sketch of such a fusion module follows this list).
•Purely global features are extracted from the entire space of the point cloud.
•Experimental results are boosted by the deep fusion of local and global features.
•Classification accuracy gains 0.8% over the state of the art.
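The "adaptively fused" highlight refers to the attention-based fusion module. Below is a minimal sketch of one plausible form of such a module, a learned per-point gate that blends the two branch features; the class name, layer sizes, and sigmoid gating are assumptions for illustration, not the paper's exact design (which also injects non-local context via self-attention, omitted here).

```python
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    """Per-point gated fusion of local and global rotation-invariant features.

    A small MLP predicts a weight in [0, 1] for each point from the
    concatenated branch features, then blends the two branches, so the mix
    adapts to each point's geometric context.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(inplace=True),
            nn.Linear(dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        # local_feat, global_feat: (N, dim) per-point features from the two branches.
        w = self.gate(torch.cat([local_feat, global_feat], dim=-1))  # (N, 1) blend weight
        return w * local_feat + (1.0 - w) * global_feat

# Example: fuse 64-dim features for 1024 points.
fuse = AttentiveFusion(64)
fused = fuse(torch.randn(1024, 64), torch.randn(1024, 64))  # (1024, 64)
```

The sigmoid gate keeps the blend convex, so the fused feature always lies between the two branch features; since both inputs are already rotation-invariant, the fused output is as well.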
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2024.110624