Estimating Metric Poses of Dynamic Objects Using Monocular Visual-Inertial Fusion
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: A monocular 3D object tracking system generally produces only up-to-scale pose estimates when it has no prior knowledge of the tracked object. In this paper, we propose a novel idea to recover the metric scale of an arbitrary dynamic object by optimizing the trajectory of the object in the world frame, without motion assumptions. By introducing an additional constraint in the time domain, our monocular visual-inertial tracking system obtains continuous six-degree-of-freedom (6-DoF) pose estimates without scale ambiguity. Our method requires neither a fixed multi-camera rig nor a depth sensor for scale observability; instead, the IMU inside the monocular sensing suite provides scale information for both the camera itself and the tracked object. We build the proposed system on top of our monocular visual-inertial system (VINS) to obtain accurate state estimation of the monocular camera in the world frame. The whole system consists of a 2D object tracker, an object region-based visual bundle adjustment (BA), VINS, and a correlation-analysis-based metric scale estimator. Experimental comparisons with ground truth demonstrate the accuracy of our 3D tracking, while a mobile augmented reality (AR) demo shows the feasibility of potential applications.
DOI: 10.48550/arxiv.1808.06753
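
The abstract indicates that the metric camera trajectory from VINS, anchored by the IMU, is what makes the object's scale observable. The minimal sketch below illustrates only that parameterization: given metric camera poses and up-to-scale object positions in the camera frame, a single scale factor is fit in closed form. The paper's actual correlation-analysis estimator is not described in the abstract; the stand-in criterion used here (a trajectory-smoothness prior) is an assumption the paper explicitly avoids, and all names below are hypothetical.

```python
import numpy as np

def estimate_metric_scale(p_cam, R_cam, d_obj):
    """Illustrative scale fit (not the paper's correlation-analysis method).

    p_cam : (N, 3)    metric camera positions in the world frame (from VINS)
    R_cam : (N, 3, 3) camera-to-world rotation matrices (from VINS)
    d_obj : (N, 3)    up-to-scale object positions in the camera frame
                      (e.g. from an object-region visual BA)

    The world-frame object trajectory is x_i(s) = p_i + s * R_i @ d_i.
    As a stand-in criterion, choose s to minimize the finite-difference
    acceleration of x_i(s), which has a closed-form least-squares solution.
    """
    r = np.einsum('nij,nj->ni', R_cam, d_obj)        # R_i @ d_i for every frame
    A = p_cam[2:] - 2.0 * p_cam[1:-1] + p_cam[:-2]   # camera part of the acceleration
    B = r[2:] - 2.0 * r[1:-1] + r[:-2]               # object part of the acceleration
    # minimize sum_i ||A_i + s * B_i||^2  =>  s = -<A, B> / <B, B>
    return -np.sum(A * B) / np.sum(B * B)

# Example usage with N frames of VINS poses and object detections:
# s = estimate_metric_scale(p_cam, R_cam, d_obj)
# x_world = p_cam + s * np.einsum('nij,nj->ni', R_cam, d_obj)
```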