Combined Image- and World-Space Tracking in Traffic Scenes
Format: | Article |
Language: | English |
Abstract: | Tracking in urban street scenes plays a central role in autonomous systems
such as self-driving cars. Most of the current vision-based tracking methods
perform tracking in the image domain. Other approaches, e.g. those based on LIDAR
and radar, track purely in 3D. While some vision-based tracking methods invoke 3D
information in parts of their pipeline, and some 3D-based methods utilize
image-based information in components of their approach, we propose to use
image- and world-space information jointly throughout our method. We present
our tracking pipeline as a 3D extension of image-based tracking. From enhancing
the detections with 3D measurements to the reported positions of every tracked
object, we use world-space 3D information at every stage of processing. We
accomplish this by our novel coupled 2D-3D Kalman filter, combined with a
conceptually clean and extendable hypothesize-and-select framework. Our
approach matches the current state-of-the-art on the official KITTI benchmark,
which performs evaluation in the 2D image domain only. Further experiments show
significant improvements in 3D localization precision by enabling our coupled
2D-3D tracking. |
DOI: | 10.48550/arxiv.1809.07357 |
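The coupled 2D-3D Kalman filter described in the abstract can be illustrated with a minimal sketch. The state layout, matrices, and noise values below are assumptions for illustration only (the paper's actual state and coupling are not specified here): the state stacks the image-space box centre (u, v) and the world-space position (x, y, z) with their velocities, so a single predict/update cycle keeps the image-domain and world-domain estimates consistent.

```python
import numpy as np

# Hypothetical sketch of a joint image-/world-space Kalman filter.
# State (10-dim): [u, v, x, y, z, du, dv, dx, dy, dz]
# (u, v) is the image-space box centre; (x, y, z) the world-space position.

def make_constant_velocity_model(dt=1.0):
    """Constant-velocity motion model over the joint 2D-3D state."""
    n = 10
    F = np.eye(n)
    F[:5, 5:] = dt * np.eye(5)   # positions advance by velocity * dt
    H = np.zeros((5, n))
    H[:, :5] = np.eye(5)         # a detection measures (u, v, x, y, z)
    return F, H

def kalman_predict(x, P, F, Q):
    """Propagate state and covariance through the motion model."""
    return F @ x, F @ P @ F.T + Q

def kalman_update(x, P, z, H, R):
    """Fuse a joint 2D-3D measurement z into the state estimate."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Usage: track a detection drifting 1 px/frame in u and 0.5 m/frame in x.
F, H = make_constant_velocity_model(dt=1.0)
Q = 1e-3 * np.eye(10)                    # assumed process noise
R = 1e-2 * np.eye(5)                     # assumed measurement noise
x = np.zeros(10)
P = np.eye(10)
for t in range(1, 6):
    x, P = kalman_predict(x, P, F, Q)
    z = np.array([1.0 * t, 0.0, 0.5 * t, 0.0, 0.0])
    x, P = kalman_update(x, P, z, H, R)
```

After a few frames the filter tracks both the 2D centre and the 3D position with one coupled estimate; in the paper's setting this coupling is what lets image-space evidence refine world-space localization and vice versa.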