Robust Estimation of Absolute Camera Pose via Intersection Constraint and Flow Consensus
Published in: IEEE Transactions on Image Processing, 2020-01, Vol. 29, pp. 6615-6629
Format: Article
Language: English
Abstract: Estimating the absolute camera pose requires 3D-to-2D correspondences of points and/or lines. In practice, however, these correspondences are inevitably corrupted by outliers, which degrades pose estimation. Existing outlier removal strategies for robust pose estimation have limitations: they are only applicable to points, rely on prior pose information, or fail to handle high outlier ratios. By contrast, we propose a general and accurate outlier removal strategy. It can be integrated with various existing pose estimation methods that are otherwise vulnerable to outliers, is applicable to points, lines, and combinations of both, and does not rely on any prior pose information. Our strategy has a nested structure composed of an outer and an inner module. First, the outer module leverages our intersection constraint: the projection rays or planes defined by inliers intersect at the camera center. It alternately computes the inlier probabilities of the correspondences and estimates the camera pose, and it runs reliably and efficiently under high outlier ratios. Second, the inner module exploits our flow consensus: the 2D displacement vectors or 3D directed arcs generated by inliers exhibit a common directional regularity, i.e., they follow a dominant flow trend. The inner module refines the inlier probabilities obtained at each iteration of the outer module; this refinement improves accuracy and facilitates the convergence of the outer module. Experiments on both synthetic data and real-world images show that our method outperforms state-of-the-art approaches in terms of accuracy and robustness.
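To make the nested structure described in the abstract concrete, here is a minimal, point-only Python sketch of such an alternation. It is not the authors' implementation: for brevity it fits an uncalibrated 3x4 projection matrix by weighted DLT (standing in for the paper's pose estimation step), converts reprojection residuals to soft inlier probabilities with a Gaussian kernel (a simplified stand-in for the intersection-constraint scoring of the outer module), and down-weights correspondences whose 2D displacement vectors disagree with the dominant flow direction (a simplified stand-in for the flow consensus of the inner module). All function names and parameters (`sigma`, `iters`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def weighted_dlt(X, x, w):
    """Weighted DLT estimate of a 3x4 projection matrix P from 3D points
    X (N,3), 2D observations x (N,2), and per-correspondence weights w (N,)."""
    N = X.shape[0]
    Xh = np.hstack([X, np.ones((N, 1))])          # homogeneous 3D points
    A = np.zeros((2 * N, 12))
    for i in range(N):
        A[2 * i, 0:4] = w[i] * Xh[i]              # p1.X - u * p3.X = 0
        A[2 * i, 8:12] = -w[i] * x[i, 0] * Xh[i]
        A[2 * i + 1, 4:8] = w[i] * Xh[i]          # p2.X - v * p3.X = 0
        A[2 * i + 1, 8:12] = -w[i] * x[i, 1] * Xh[i]
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)                   # smallest singular vector

def inlier_probs(residuals, sigma):
    """Soft inlier probabilities from reprojection residuals (Gaussian kernel)."""
    return np.exp(-0.5 * (residuals / sigma) ** 2)

def flow_refine(x, x_proj, probs):
    """Inner-module stand-in: down-weight correspondences whose displacement
    vector (observed -> reprojected) deviates from the dominant flow direction."""
    flow = x_proj - x
    norms = np.linalg.norm(flow, axis=1)
    dirs = flow / (norms[:, None] + 1e-12)
    dominant = (probs[:, None] * dirs).sum(axis=0)
    dominant /= np.linalg.norm(dominant) + 1e-12
    agreement = np.clip(dirs @ dominant, 0.0, 1.0)  # cosine similarity in [0, 1]
    # Blend toward 1 for near-zero flows so already well-explained points
    # (whose direction is noise-dominated) are not penalized.
    signif = norms / (norms.max() + 1e-12)
    return probs * ((1.0 - signif) + signif * agreement)

def robust_pose(X, x, iters=10, sigma=0.05):
    """Alternate weighted pose fitting (outer module) with flow-based
    probability refinement (inner module)."""
    N = X.shape[0]
    w = np.ones(N)
    Xh = np.hstack([X, np.ones((N, 1))])
    for _ in range(iters):
        P = weighted_dlt(X, x, w)
        xp_h = Xh @ P.T
        xp = xp_h[:, :2] / xp_h[:, 2:3]
        r = np.linalg.norm(xp - x, axis=1)
        w = inlier_probs(r, sigma)                # outer: residual-based weighting
        w = flow_refine(x, xp, w)                 # inner: flow-consensus refinement
        w = np.maximum(w, 1e-3)                   # keep the system well-conditioned
    return P, w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (100, 3)) + np.array([0.0, 0.0, 5.0])
    P_true = np.hstack([np.eye(3), np.zeros((3, 1))])
    x = np.hstack([X, np.ones((100, 1))]) @ P_true.T
    x = x[:, :2] / x[:, 2:3]
    x[:30] += rng.uniform(-0.5, 0.5, (30, 2))     # corrupt 30% with gross outliers
    P, w = robust_pose(X, x)
    print("mean outlier weight:", w[:30].mean(), "mean inlier weight:", w[30:].mean())
```

The `sigma` bandwidth is scale-dependent and would need tuning per dataset. The paper's actual method additionally handles lines and the 3D directed arcs mentioned in the abstract, and estimates a calibrated pose rather than a full projection matrix; this point-only sketch omits both.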
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2020.2992336