To Drop or to Select: Reduce the Negative Effects of Disturbance Features for Point Cloud Classification from an Interpretable Perspective
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Perturbation features limit the performance of point cloud classification models on both clean point clouds and point clouds captured in the real world. In this paper, we propose two methods to strengthen models by reducing the negative impact of these nuisance features from the perspective of interpretability: dropping the nuisance points before the point cloud is fed into the model, and adaptively selecting the important features during training. The former is achieved through saliency analysis of the models; the perturbation points are those that contribute little to the model's prediction. For each sample, dropping the low-contribution points according to their saliency scores is equivalent to filtering out the perturbation features. We design a generic framework for generating saliency maps for various models and datasets, and obtain empirical values for the number of dropped points for each combination of model and dataset. We then apply this unsupervised dropping process to improve model robustness. The latter is achieved by adaptive downsampling: we design a multi-stage, learnable, class-attention-based downsampling module to replace the commonly used Farthest Point Sampling (FPS). As training progresses, the downsampling module tends to select the features common to each category, thereby eliminating nuisance features and improving the model's learning efficiency. For dropping points (DP), we generate saliency maps for PointNet, PointNet++, DGCNN, and PointMLP on ModelNet40 and ScanObjectNN. PointNet+DP reaches an overall accuracy (OA) of 92.5% and 72% on ModelNet40 and ScanObjectNN, surpassing the original model by 3.4% and 5.3%, and DP raises the OA of PointNet++ on ModelNet40 object classification from 91.8% to 93.7%. For adaptive feature selection (AFS), PointMLP-elite+AFS reaches an OA of 92.5% and a mean accuracy (mAcc) of 72% on ScanObjectNN, surpassing the original model by 0.8% and 1%, and it matches the performance of PointMLP with only 6.3% of its parameters. Considering the difficulty of deploying deep models, PointMLP-elite+AFS is, to our knowledge, the most cost-effective classification model on ScanObjectNN.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3266340
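The abstract outlines two techniques: saliency-guided point dropping (DP) and learnable attention-based downsampling (AFS). The sketch below illustrates the general idea of gradient-based saliency dropping; it is not the authors' released code. The classifier interface (a PyTorch `model` mapping point clouds of shape `(B, N, 3)` to class logits) and the drop count `n_drop` are assumptions standing in for the per-model, per-dataset empirical values the abstract mentions.

```python
import torch


def drop_low_saliency_points(model, points, n_drop=64):
    """Drop the n_drop points with the lowest gradient-based saliency.

    points: (B, N, 3) input point clouds; returns (B, N - n_drop, 3).
    """
    model.eval()
    points = points.detach().clone().requires_grad_(True)   # leaf tensor, (B, N, 3)
    logits = model(points)                                   # (B, C) class logits
    logits.max(dim=1).values.sum().backward()                # predicted-class scores

    # Per-point saliency: L2 norm of the gradient w.r.t. the point's coordinates.
    saliency = points.grad.norm(dim=2)                       # (B, N)

    # Keep the N - n_drop highest-saliency points, i.e. drop the low-contribution ones.
    keep_idx = saliency.topk(points.shape[1] - n_drop, dim=1).indices
    return torch.gather(points.detach(), 1,
                        keep_idx.unsqueeze(-1).expand(-1, -1, 3))
```

For the adaptive feature selection side, the module below is a simplified stand-in for the multi-stage class-attention downsampling described in the abstract: a small learned scorer ranks per-point features and the top-k are kept, replacing the purely geometric selection of Farthest Point Sampling. The layer sizes and the softmax reweighting are illustrative choices, not the paper's exact design.

```python
import torch
import torch.nn as nn


class AttentionDownsample(nn.Module):
    """Learnable downsampling: keep the n_keep points with the highest learned score."""

    def __init__(self, feat_dim, n_keep):
        super().__init__()
        self.n_keep = n_keep
        self.score = nn.Sequential(nn.Linear(feat_dim, feat_dim // 2),
                                   nn.ReLU(),
                                   nn.Linear(feat_dim // 2, 1))

    def forward(self, feats):                      # feats: (B, N, D) per-point features
        attn = self.score(feats).squeeze(-1)       # (B, N) learned importance scores
        idx = attn.topk(self.n_keep, dim=1).indices
        kept = torch.gather(feats, 1,
                            idx.unsqueeze(-1).expand(-1, -1, feats.shape[-1]))
        # Reweight kept features by their softmaxed scores so the selection
        # remains differentiable with respect to the scoring network.
        weights = torch.softmax(torch.gather(attn, 1, idx), dim=1).unsqueeze(-1)
        return kept * weights
```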