Object tracking in the presence of shaking motions


Detailed Description

Bibliographic Details
Published in: Neural Computing & Applications, 2019-10, Vol. 31 (10), p. 5917-5934
Main authors: Dai, Manna; Cheng, Shuying; He, Xiangjian; Wang, Dadong
Format: Article
Language: English
Online access: Full text
Description
Abstract: Visual tracking can be interpreted as a process of searching for targets and optimizing that search. In this paper, we present a novel tracking framework for shaking targets. We formulate the underlying geometric relation between a search scope and a target displacement. Uniform sampling within the search scopes is implemented with sliding windows. To alleviate possible redundant matching, we propose a double-template structure comprising the initial and previous tracking results. The element-wise similarities between a template and its candidates are calculated jointly using kernel functions, which provide a better outlier-rejection property. The STC (spatio-temporal context) algorithm is used to improve the tracking results by maximizing a confidence map that incorporates temporal and spatial context cues about the tracked targets. For better adaptation to appearance variations, we employ linear interpolation to update the context prior probability of the STC method. Both qualitative and quantitative evaluations are performed on all sequences containing shaking motions selected from the challenging OTB-50 benchmark. The proposed approach is compared with 12 state-of-the-art tracking methods and outperforms them on the selected sequences while running in MATLAB without code optimization. We have also performed further experiments on the whole OTB-50 and VOT 2015 datasets. Although most sequences in these two datasets do not contain the motion blur this paper focuses on, the results of our method are still favorable compared with those of the state-of-the-art approaches.
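Two of the abstract's ingredients are simple enough to sketch: an element-wise kernel similarity between a template and a candidate patch (a Gaussian kernel suppresses large residuals, which is where the outlier-rejection property comes from), and the linear-interpolation update of the context prior. This is a minimal illustration only; the kernel choice, `sigma`, and the learning rate `rho` are assumptions for demonstration, not the parameters used in the paper.

```python
import numpy as np

def gaussian_kernel_similarity(template, candidate, sigma=0.2):
    # Element-wise Gaussian kernel: large residuals (outliers) are
    # damped exponentially, unlike a raw squared error.
    # sigma is a hypothetical bandwidth chosen for illustration.
    diff = template - candidate
    return np.exp(-(diff ** 2) / (2.0 * sigma ** 2))

def update_context_prior(prior_prev, prior_curr, rho=0.2):
    # Linear interpolation between the previous context prior and the
    # one estimated from the current frame; rho is an assumed rate.
    return (1.0 - rho) * prior_prev + rho * prior_curr

# Toy example on 3x3 patches
template = np.full((3, 3), 0.5)
candidate = np.full((3, 3), 0.5)          # identical patch
sim = gaussian_kernel_similarity(template, candidate)  # all ones

prev = np.full((3, 3), 0.5)
curr = np.ones((3, 3))
updated = update_context_prior(prev, curr, rho=0.2)
# each entry: 0.8 * 0.5 + 0.2 * 1.0 = 0.6
```

A smaller `rho` makes the prior change more slowly, trading responsiveness to appearance variation for stability against transient occlusions.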
ISSN:0941-0643
1433-3058
DOI:10.1007/s00521-018-3387-3