Unconstrained Matching of 2D and 3D Descriptors for 6-DOF Pose Estimation
Format: Article
Language: English
Online access: Order full text
Abstract: This paper proposes a novel concept: directly matching feature descriptors extracted from 2D images against feature descriptors extracted from 3D point clouds. We use this concept to directly localize images in a 3D point cloud. We generate a dataset of matching 2D and 3D points with their corresponding feature descriptors, which is used to train a Descriptor-Matcher classifier. To localize the pose of an image at test time, we extract keypoints and feature descriptors from the query image. The trained Descriptor-Matcher then matches the features from the image with those from the point cloud. The locations of the matched features are passed to a robust pose estimation algorithm to predict the location and orientation of the query image. We carried out an extensive evaluation of the proposed method for indoor and outdoor scenarios, and with different types of point clouds, to verify the feasibility of our approach. Experimental results demonstrate that direct matching of feature descriptors from images and point clouds is not only viable but can also be used reliably to estimate the 6-DOF poses of query cameras in any type of 3D point cloud, in an unconstrained manner, with high precision.
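The abstract describes a three-step test-time pipeline: extract 2D keypoints and descriptors from the query image, match them to the point cloud's 3D descriptors with the trained Descriptor-Matcher, and recover the pose from the resulting 2D-3D correspondences with a robust estimator. Below is a minimal sketch of that pipeline, assuming SIFT for the image features, a scikit-learn-style classifier whose predict_proba scores a concatenated 2D/3D descriptor pair, and OpenCV's solvePnPRansac as the robust pose estimator; the names matcher, pc_descriptors, pc_points and the 0.9 threshold are illustrative assumptions, not the authors' published interface.

```python
import cv2
import numpy as np

def localize(query_image, matcher, pc_descriptors, pc_points, K):
    """Estimate the 6-DOF pose of query_image within the point cloud.

    matcher        -- pre-trained binary classifier scoring 2D/3D pairs (assumed)
    pc_descriptors -- (M, D) descriptors of the 3D point-cloud keypoints (assumed)
    pc_points      -- (M, 3) XYZ coordinates of those keypoints (assumed)
    K              -- (3, 3) camera intrinsic matrix
    """
    gray = cv2.cvtColor(query_image, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, img_descriptors = sift.detectAndCompute(gray, None)

    points_2d, points_3d = [], []
    for kp, d2 in zip(keypoints, img_descriptors):
        # Score every (2D, 3D) descriptor pair; keep the best-scoring
        # 3D point when the classifier is confident it is a true match.
        pairs = np.hstack([np.tile(d2, (len(pc_descriptors), 1)),
                           pc_descriptors])
        scores = matcher.predict_proba(pairs)[:, 1]
        best = int(np.argmax(scores))
        if scores[best] > 0.9:  # assumed confidence threshold
            points_2d.append(kp.pt)
            points_3d.append(pc_points[best])

    if len(points_3d) < 4:
        raise RuntimeError("Too few 2D-3D matches for PnP")

    # Robust pose estimation: PnP inside a RANSAC loop rejects the
    # remaining false matches and returns the camera pose.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(points_3d, dtype=np.float32),
        np.asarray(points_2d, dtype=np.float32),
        K, distCoeffs=None)
    return rvec, tvec  # orientation (Rodrigues vector) and translation
```

The exhaustive pairwise scoring above is the simplest way to express the matching step; for large point clouds one would batch or prune the candidate 3D descriptors before invoking the classifier.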
DOI: 10.48550/arxiv.2005.14502