Robust Multi-Object Tracking with Local Appearance and Stable Motion Models
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Multi-object tracking (MOT) has long been studied for video understanding in computer vision. However, existing MOT frameworks usually rely on simple appearance or motion models and may struggle in dynamic environments with similar-looking objects and complex motion. In this paper, we present a robust MOT framework with local appearance and stable motion models to overcome these two challenges. The framework incorporates object and local part detectors, a feature extractor, a keypoint extractor, and a data association method. For data association, we use five types of similarity metrics and a cascaded matching strategy. The local appearance model is used in addition to global appearance features of full bounding boxes to obtain discriminative features even for objects with similar appearances. At the same time, the stable motion model treats the core of the body as the central point of the object and subdivides the body using a novel 12-tuple Kalman state vector to analyze complex motion. As a result, our tracker achieves state-of-the-art performance on the DanceTrack test set, surpassing all other listed tracking systems in both detection and tracking quality metrics, including HOTA, DetA, AssA, and MOTA. The source code is available at https://github.com/Jubi-Hwang/Robust-MOT-with-Local-Appearance-and-Stable-Motion-Models.
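To illustrate the motion-model idea described in the abstract, the sketch below shows a constant-velocity Kalman filter over a 12-dimensional state. The state layout used here (object center, upper-part center, lower-part center, and their velocities), the noise values, and the class name are assumptions made for illustration only; the paper's actual 12-tuple state vector and parameters may differ.

```python
import numpy as np

# Hypothetical 12-tuple state (an assumption; the paper's exact layout may differ):
# [cx, cy, ux, uy, lx, ly, vcx, vcy, vux, vuy, vlx, vly]
# object center, upper-part center, lower-part center, and their velocities.

class StableMotionKalman:
    def __init__(self, dt=1.0, process_var=1e-2, meas_var=1e-1):
        n, m = 12, 6                        # state and measurement dimensions
        # Constant-velocity transition: position += velocity * dt
        self.F = np.eye(n)
        self.F[:m, m:] = dt * np.eye(m)
        # Only the three 2-D points are observed, not the velocities
        self.H = np.hstack([np.eye(m), np.zeros((m, m))])
        self.Q = process_var * np.eye(n)    # process noise (placeholder values)
        self.R = meas_var * np.eye(m)       # measurement noise (placeholder values)
        self.x = np.zeros(n)                # state mean
        self.P = np.eye(n)                  # state covariance

    def initiate(self, z):
        """Start a track from the first measurement; velocities begin at zero."""
        self.x[:6] = z
        self.x[6:] = 0.0

    def predict(self):
        """Propagate the state one frame ahead and return the predicted points."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:6]

    def update(self, z):
        """Correct the prediction with a new 6-vector measurement of the points."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(12) - K @ self.H) @ self.P
        return self.x


# Minimal usage: initiate a track, predict the next frame, then update.
kf = StableMotionKalman()
kf.initiate(np.array([320.0, 240.0, 320.0, 200.0, 320.0, 280.0]))
predicted_points = kf.predict()
kf.update(np.array([322.0, 241.0, 322.0, 201.0, 322.0, 281.0]))
```

In a tracker, the predicted points from such a filter would feed the motion-based similarity terms used during data association; the five similarity metrics and cascaded matching mentioned in the abstract are detailed in the paper and repository rather than here.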
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3296731