Loss Distillation via Gradient Matching for Point Cloud Completion with Weighted Chamfer Distance
Format: | Article |
Language: | English |
Abstract: | 3D point clouds enhance a robot's ability to perceive the geometric
information of its environment, enabling many downstream tasks such as grasp
pose detection and scene understanding. The performance of these tasks,
however, relies heavily on the quality of the input data, as incomplete inputs
can lead to poor results and failure cases. Recent training loss functions
designed for deep learning-based point cloud completion, such as Chamfer
distance (CD) and its variants (e.g., HyperCD), imply that a good gradient
weighting scheme can significantly boost performance. However, these CD-based
loss functions usually require data-related parameter tuning, which can be
time-consuming for data-extensive tasks. To address this issue, we aim to find
a family of weighted training losses (*weighted CD*) that requires no
parameter tuning. To this end, we propose a search scheme, *Loss Distillation
via Gradient Matching*, to find good candidate loss functions by mimicking the
learning behavior in backpropagation between HyperCD and weighted CD. Once
this is done, we propose a novel bilevel optimization formulation to train the
backbone network based on the weighted CD loss. We observe that: (1) with
proper weighting functions, weighted CD can always achieve performance similar
to HyperCD, and (2) the Landau weighted CD, namely *Landau CD*, can outperform
HyperCD for point cloud completion and leads to new state-of-the-art results
on several benchmark datasets. Our demo code is available at
https://github.com/Zhang-VISLab/IROS2024-LossDistillationWeightedCD. |
DOI: | 10.48550/arxiv.2409.06171 |
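As context for the weighted Chamfer distance (weighted CD) discussed in the abstract, here is a minimal NumPy sketch of a generic weighted CD over two point sets. The function name `weighted_chamfer` and the identity weighting in the example are illustrative assumptions; the paper's specific Landau weighting is defined in the full text, not reproduced here.

```python
import numpy as np

def weighted_chamfer(P, Q, w=lambda d: d):
    """Generic weighted Chamfer distance between point sets P (n,3) and Q (m,3).

    w maps each squared nearest-neighbour distance to a per-point loss term;
    w(d) = d recovers the plain (squared-distance) Chamfer distance.
    """
    # Pairwise squared distances via broadcasting, shape (n, m).
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=-1)
    # Symmetric sum of weighted nearest-neighbour terms in both directions.
    return w(d2.min(axis=1)).mean() + w(d2.min(axis=0)).mean()

P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
Q = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(weighted_chamfer(P, Q))  # identical sets -> 0.0
```

Choosing a different `w` changes how large and small residuals are weighted during backpropagation, which is the degree of freedom the paper's gradient-matching search operates over.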