Learning compact and overlap-biased interactions for point cloud registration
Published in: Neurocomputing (Amsterdam), 2024-09, Vol. 598, p. 127949, Article 127949
Authors:
Format: Article
Language: English
Online access: Full text
Abstract: Point cloud registration is a fundamental task in computer vision. Recent Transformer-based methods for point cloud registration take advantage of the interaction modeling ability of the attention operation. However, feature ambiguity and low overlap remain bottlenecks in real-scene point cloud registration. In this paper, we present a new neural network that addresses these two problems within the Transformer architecture. First, we propose an Optimal Transport guided Cross Attention (OT-CA) that builds compact interactions in cross attention, mitigating the feature ambiguity problem. It uses a Spatial Consistency guided cost Regularization (SCR) to build the cost of the optimal transport problem, and obtains the weight matrix of the cross attention by solving it. The structural information and more reasonable interactions alleviate the feature ambiguity problem with fewer computing resources. Meanwhile, we propose a Separate-and-Joint Overlap Prediction module to address the low-overlap problem. It adopts separate branches and training steps for feature matching and overlap prediction to reduce the negative impact between the two tasks, and a joint training process to make full use of the overlap information for learning better feature matching. Finally, the proposed modules are embedded into a coarse-to-fine pipeline. Our method shows state-of-the-art performance on three benchmark datasets.
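To make the OT-CA idea concrete, the sketch below (assuming PyTorch) shows cross attention whose weight matrix is an entropic optimal-transport plan obtained by Sinkhorn iterations rather than a softmax. The cost here is plain negative scaled dot-product similarity; the paper's SCR cost with its spatial-consistency term is not reproduced, and the function names, shapes, and hyperparameters (`epsilon`, `n_iters`) are illustrative assumptions, not the authors' implementation.

```python
import math
import torch


def sinkhorn_plan(cost: torch.Tensor, epsilon: float = 0.1, n_iters: int = 20) -> torch.Tensor:
    """Solve entropic OT with uniform marginals in log space; returns a (B, N, M) plan."""
    B, N, M = cost.shape
    log_mu = cost.new_full((B, N), -math.log(N))   # log of uniform row marginal
    log_nu = cost.new_full((B, M), -math.log(M))   # log of uniform column marginal
    log_K = -cost / epsilon                        # log kernel of the entropic OT problem
    f = torch.zeros_like(log_mu)                   # log row scalings
    g = torch.zeros_like(log_nu)                   # log column scalings
    for _ in range(n_iters):
        f = log_mu - torch.logsumexp(log_K + g.unsqueeze(1), dim=2)
        g = log_nu - torch.logsumexp(log_K + f.unsqueeze(2), dim=1)
    return torch.exp(log_K + f.unsqueeze(2) + g.unsqueeze(1))


def ot_cross_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Cross attention whose weights come from an OT plan instead of a softmax.

    q: (B, N, D) source features; k, v: (B, M, D) target features.
    A structure-aware cost (as in SCR) would be injected where `cost` is built.
    """
    cost = -torch.einsum('bnd,bmd->bnm', q, k) / q.shape[-1] ** 0.5
    plan = sinkhorn_plan(cost)                      # (B, N, M) transport plan
    plan = plan / plan.sum(dim=-1, keepdim=True)    # renormalize rows to use as attention weights
    return torch.bmm(plan, v)                       # (B, N, D) updated source features
```

Because the OT plan couples all rows and columns through its marginal constraints, each source point is encouraged to spread its attention mass over few, mutually consistent target points, which is one way to read the "compact interactions" claim above.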
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2024.127949