Classification of single-view object point clouds



Bibliographic Details
Published in: Pattern Recognition 2023-03, Vol. 135, p. 109137, Article 109137
Authors: Xu, Zelin; Liu, Kangjun; Chen, Ke; Ding, Changxing; Wang, Yaowei; Jia, Kui
Format: Article
Language: English
Online Access: Full text
Description
Abstract:

Highlights:
• We approach object point cloud classification from a more practical perspective, and propose the single-view, partial setting under which point clouds covering the partial surface of object instances are observed.
• We discuss the limitations of existing methods, and show that their performance drops drastically under this practical setting.
• We propose a baseline method, the Pose-Accompanied Point cloud classification Network (PAPNet), which accompanies the classification task with an auxiliary task of supervised object pose learning.
• To advance the research field, we adapt the existing ModelNet40 and ScanNet benchmarks to the single-view, partial setting.

Object point cloud classification has drawn great research attention since the release of benchmarking datasets such as ModelNet and ShapeNet. These benchmarks assume point clouds covering the complete surfaces of object instances, for which plenty of high-performing methods have been developed. However, their settings deviate from those often met in practice, where, due to (self-)occlusion, a point cloud covering only a partial surface of an object is captured from an arbitrary view. We show in this paper that the performance of existing point cloud classifiers drops drastically under the considered single-view, partial setting; this is consistent with the observation that the semantic category of a partial object surface is less ambiguous only when its distribution on the whole surface is clearly specified. To this end, we argue that in the single-view, partial setting, supervised learning of object pose estimation should accompany classification. Technically, we propose a baseline method, the Pose-Accompanied Point cloud classification Network (PAPNet); built upon SE(3)-equivariant convolutions, PAPNet learns intermediate pose transformations for equivariant features defined on vector fields, which makes the subsequent classification easier, ideally in the category-level, canonical pose. By adapting the existing ModelNet40 and ScanNet datasets to the single-view, partial setting, experimental results verify the necessity of object pose estimation and the superiority of our PAPNet over existing classifiers.
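The core architectural idea summarized in the abstract (a shared point cloud encoder feeding a pose head, whose predicted rotation re-poses the input toward a category-level canonical pose before classification) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all names, layer sizes, and the plain PointNet-style encoder with fixed random weights are assumptions, and the SE(3)-equivariant convolutions that PAPNet actually builds on are omitted for brevity.

```python
# Toy sketch of pose-accompanied point cloud classification.
# Hypothetical names/sizes; weights are fixed random (no training shown).
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((3, 64)) * 0.1    # per-point encoder weights
W_pose = rng.standard_normal((64, 6)) * 0.1   # pose head (6D rotation params)
W_cls = rng.standard_normal((64, 40)) * 0.1   # classifier (ModelNet40-sized)

def encode(points):
    """Toy permutation-invariant encoder: per-point ReLU features + max pool."""
    return np.maximum(points @ W_enc, 0.0).max(axis=0)    # (64,) global feature

def pose_head(feat):
    """Predict a valid rotation from a 6D (two-vector) parameterization,
    orthonormalized by Gram-Schmidt."""
    a, b = (feat @ W_pose).reshape(2, 3)
    x = a / np.linalg.norm(a)
    b = b - (b @ x) * x
    y = b / np.linalg.norm(b)
    return np.stack([x, y, np.cross(x, y)], axis=1)       # (3, 3) rotation

def classify(points):
    """Pose-accompanied classification: predict a pose for the partial cloud,
    re-pose it toward a canonical frame, then re-encode and classify."""
    R = pose_head(encode(points))
    return encode(points @ R) @ W_cls                     # (40,) class logits

partial_cloud = rng.standard_normal((256, 3))  # stand-in for a partial scan
logits = classify(partial_cloud)
R = pose_head(encode(partial_cloud))
print(logits.shape)                            # (40,)
```

In the paper's actual method the pose branch is supervised with ground-truth object poses, which is what makes the downstream classification easier in the canonical frame; the sketch above only shows the data flow of the two coupled heads.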
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2022.109137