3D Multi-Object Tracking Based on Dual-Tracker and D-S Evidence Theory

Published in: IEEE Transactions on Intelligent Vehicles, March 2023, Vol. 8, No. 3, pp. 2426-2436
Authors: Ma, Yuanzhi; Zhang, Jindong; Qin, Guihe; Jin, Jingyi; Zhang, Kunpeng; Pan, Dongyu; Chen, Mai
Format: Article
Language: English
Abstract: Most current self-driving cars are equipped with a variety of sensors (such as lidar and cameras), and fusing multi-sensor information for 3D multi-object tracking (MOT) has become a trend. Moreover, unlike the ideal experimental conditions of public datasets, cars on real roads may suffer a 3D sensor failure due to weather, mutual interference, etc. In this paper, we propose a dual-tracker-based 3D MOT system that fuses 2D and 3D information to track objects and maintains good tracking accuracy even when the 3D sensor fails for a single frame or for multiple consecutive frames. Two internal trackers perform data association on the 3D and 2D information, respectively; however, their outputs may conflict. To resolve such conflicts, we first compute the degree of association of potential matched pairs from the features extracted by each sensor, then normalize it into the mass function of Dempster-Shafer (D-S) evidence theory, and finally combine the evidence to obtain the final matched pairs of detection instances and track estimates. We conduct extensive experiments on the KITTI MOT dataset and simulate sensor-failure scenarios by ignoring the output of the 3D detector. Using the latest evaluation measure for comparison, the results show that our method outperforms other advanced open-source 3D MOT systems in a variety of scenarios.
ISSN: 2379-8858, 2379-8904
DOI: 10.1109/TIV.2022.3216102
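The fusion step described in the abstract (normalizing per-sensor association scores into mass functions, then combining the evidence) follows Dempster's rule of combination. Below is a minimal sketch of that rule for a single candidate detection-track pair; the hypothesis names ("match", "no_match"), the example mass values, and the function dempster_combine are illustrative assumptions, not the paper's actual implementation.

    # Minimal sketch of Dempster's rule of combination, as used to fuse
    # conflicting association evidence from a 2D and a 3D tracker.
    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two mass functions keyed by frozenset focal elements."""
        combined = {}
        conflict = 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass on contradictory hypotheses
        if conflict >= 1.0:
            raise ValueError("total conflict: evidence cannot be combined")
        # Dempster's rule: renormalize by the non-conflicting mass
        return {h: w / (1.0 - conflict) for h, w in combined.items()}

    # Frame of discernment for one candidate detection-track pair
    MATCH, NO_MATCH = frozenset({"match"}), frozenset({"no_match"})
    EITHER = MATCH | NO_MATCH  # ignorance: no commitment either way

    # Hypothetical normalized association scores from the two trackers
    m_3d = {MATCH: 0.7, NO_MATCH: 0.2, EITHER: 0.1}
    m_2d = {MATCH: 0.4, NO_MATCH: 0.5, EITHER: 0.1}

    fused = dempster_combine(m_3d, m_2d)
    print(fused)  # accept the pair if the fused mass on MATCH dominates

Here the two input mass functions disagree (the 3D tracker favors a match, the 2D tracker does not), and the combination rule discards the conflicting product terms and renormalizes the rest, yielding a single fused belief per candidate pair from which the final matching can be read off.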