Visual tracking based on a unified tracking-and-detection framework with spatial-temporal consistency filtering
Published in: Computers & Electrical Engineering, 2019-12, Vol. 80, p. 106453, Article 106453
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Full text
Summary: Exploring the advantages of combining convolutional features with discriminative correlation filters has recently attracted a great deal of attention in visual tracking. In this paper, we propose a spatial-temporal consistency filtering (STCF) tracker built on a unified tracking-and-detection framework. First, we apply a continuous correlation filter that seamlessly embeds multi-domain, multi-scale feature maps to exploit a richer appearance representation. We then introduce a novel domain-aware detector that generates fine-grained deep features and highly likely target candidates. To handle target drift, we design spatial-temporal consistency filtering as a recovery mechanism for target re-identification and scale re-estimation. We additionally design a model reliability indicator to avoid potential model degeneration and contamination. Our STCF tracker achieves accuracy and robustness comparable to existing state-of-the-art trackers, which we demonstrate with comprehensive experiments on the Online Tracking Benchmark (OTB-2015) and the Visual Object Tracking challenge (VOT-2016 and VOT-2017) benchmarks.
ISSN: 0045-7906, 1879-0755
DOI: 10.1016/j.compeleceng.2019.106453
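
The abstract describes a tracker in the discriminative correlation filter family, where a filter is learned in the Fourier domain and updated online over time. For orientation only, the sketch below shows that general (MOSSE-style) idea for a single grayscale channel in Python/NumPy; it is not the authors' STCF method, and every name and parameter in it (SimpleCorrelationFilter, gaussian_response, lam, lr) is a hypothetical choice introduced here for illustration.

```python
# Minimal single-channel correlation-filter sketch (MOSSE-style family).
# NOT the STCF tracker from the paper; all names and parameters below are
# assumptions introduced purely for illustration.
import numpy as np


def gaussian_response(shape, sigma=2.0):
    """Desired correlation output: a Gaussian peak at the patch centre."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))


class SimpleCorrelationFilter:
    """Correlation filter learned and applied in the Fourier domain."""

    def __init__(self, lam=1e-3, lr=0.125):
        self.lam = lam  # regularisation weight
        self.lr = lr    # online learning rate (temporal model update)
        self.A = None   # filter numerator
        self.B = None   # filter denominator

    def _preprocess(self, patch):
        # Normalise a non-negative grayscale patch and apply a cosine
        # window to suppress circular boundary effects.
        patch = np.log1p(patch.astype(np.float64))
        patch = (patch - patch.mean()) / (patch.std() + 1e-8)
        win = np.outer(np.hanning(patch.shape[0]), np.hanning(patch.shape[1]))
        return patch * win

    def init(self, patch):
        """Train the filter on the first target patch."""
        X = np.fft.fft2(self._preprocess(patch))
        Y = np.fft.fft2(gaussian_response(patch.shape))
        self.A = Y * np.conj(X)
        self.B = X * np.conj(X) + self.lam

    def update(self, patch):
        """Locate the target in a new patch, then update the model online."""
        X = np.fft.fft2(self._preprocess(patch))
        response = np.real(np.fft.ifft2((self.A / self.B) * X))
        py, px = np.unravel_index(np.argmax(response), response.shape)
        # Displacement of the response peak from the patch centre gives the
        # estimated target shift within this search patch.
        dy, dx = py - patch.shape[0] // 2, px - patch.shape[1] // 2
        # Running-average (temporal) update of the filter.
        Y = np.fft.fft2(gaussian_response(patch.shape))
        self.A = (1 - self.lr) * self.A + self.lr * (Y * np.conj(X))
        self.B = (1 - self.lr) * self.B + self.lr * (X * np.conj(X) + self.lam)
        return dy, dx, response.max()  # peak value ~ a crude confidence score
```

A typical usage pattern would be to call init on a grayscale patch cropped around the target in the first frame, then call update on the patch cropped at the predicted location in each later frame, re-centring the crop by the returned displacement. A full tracker of the kind the abstract describes would add multi-channel convolutional features, scale estimation, and recovery and model-reliability mechanisms on top of this basic loop.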