Efficient Scale Estimation Methods using Lightweight Deep Convolutional Neural Networks for Visual Tracking
Format: Article
Language: English
Abstract: In recent years, visual tracking methods based on discriminative
correlation filters (DCF) have shown great promise. However, most of these
methods lack a robust scale estimation capability. Although a wide range of
recent DCF-based methods exploit features extracted from deep convolutional
neural networks (CNNs) in their translation model, the scale of the visual
target is still estimated with hand-crafted features. Because the exploitation
of CNNs imposes a high computational burden, this paper exploits pre-trained
lightweight CNN models to propose two efficient scale estimation methods,
which not only improve visual tracking performance but also provide acceptable
tracking speeds. The proposed methods are formulated on either holistic or
region representations of convolutional feature maps and integrate efficiently
into DCF formulations to learn a robust scale model in the frequency domain.
Moreover, in contrast to conventional scale estimation methods, which
iteratively extract features from different target regions, the proposed
methods use one-pass feature extraction processes that significantly improve
computational efficiency. Comprehensive experimental results on the OTB-50,
OTB-100, TC-128, and VOT-2018 visual tracking datasets demonstrate that the
proposed visual tracking methods effectively outperform state-of-the-art
methods.
DOI: 10.48550/arxiv.2004.02933
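
The abstract contrasts conventional scale estimation, which re-runs feature extraction for every candidate scale, with a one-pass feature extraction process. The sketch below illustrates that general idea only, not the paper's implementation: the backbone (torchvision's MobileNetV2 as a stand-in lightweight CNN), the centred crop-and-resize sampling of the feature map, and all sizes and scale factors are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn.functional as F
import torchvision

# Stand-in lightweight backbone (assumption): torchvision's MobileNetV2 features.
backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features.eval()

@torch.no_grad()
def one_pass_scale_features(search_region, scale_factors, out_size=16):
    """Build per-scale feature maps from a single CNN forward pass.

    search_region : (1, 3, H, W) tensor covering the largest scale level.
    scale_factors : relative scales in (0, 1], where 1.0 means the full region.
    Returns a (S, C, out_size, out_size) tensor, one feature map per scale.
    """
    fmap = backbone(search_region)                 # one forward pass: (1, C, h, w)
    _, _, h, w = fmap.shape
    levels = []
    for s in scale_factors:
        # Take a centred crop of the feature map proportional to the scale
        # factor, then resize it to a common size -- instead of re-running the
        # CNN on a rescaled image patch for every scale level.
        ch = max(int(round(h * float(s))), 1)
        cw = max(int(round(w * float(s))), 1)
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        crop = fmap[:, :, y0:y0 + ch, x0:x0 + cw]
        levels.append(F.interpolate(crop, size=(out_size, out_size),
                                    mode="bilinear", align_corners=False))
    return torch.cat(levels, dim=0)

# Example: 17 scale levels from a single pass over a 512x512 search region.
scales = 1.02 ** torch.arange(-16, 1).float()
feats = one_pass_scale_features(torch.rand(1, 3, 512, 512), scales)
print(feats.shape)                                 # torch.Size([17, 1280, 16, 16])
```

The computational point is that S forward passes of the CNN are replaced by one pass plus S cheap crop-and-resize operations on the feature map, which is what makes a dense scale search affordable at tracking frame rates.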
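The abstract also states that the scale model is learned within a DCF formulation in the frequency domain. The following is a minimal sketch of a standard DSST/fDSST-style 1-D scale filter obtained by ridge regression in the Fourier domain, applied to per-scale feature columns such as those produced above; the Gaussian label, the regularization weight, and the absence of any online update are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def learn_scale_filter(features, sigma=1.0, lam=1e-2):
    """Learn a 1-D correlation filter over scale levels in the Fourier domain.

    features : (d, S) array, one column of d feature values per scale level
               (e.g. flattened per-scale CNN features).
    Returns the filter numerator A (d, S) and regularised denominator B (S,).
    """
    d, S = features.shape
    # Desired response: a 1-D Gaussian peaked at the current (centre) scale.
    g = np.exp(-0.5 * ((np.arange(S) - S // 2) / sigma) ** 2)
    G = np.fft.fft(g)                                   # (S,)
    Fz = np.fft.fft(features, axis=1)                   # per-dimension spectra
    A = np.conj(G)[None, :] * Fz                        # ridge-regression numerator
    B = np.sum(Fz * np.conj(Fz), axis=0).real + lam     # shared denominator
    return A, B

def best_scale(A, B, features):
    """Correlate the filter with new per-scale features and return the index
    of the scale level with the maximum response."""
    Zf = np.fft.fft(features, axis=1)
    response = np.real(np.fft.ifft(np.sum(np.conj(A) * Zf, axis=0) / B))
    return int(np.argmax(response))

# Toy usage: the filter should peak at the centre scale for its training sample.
feats = np.random.randn(64, 17)
A, B = learn_scale_filter(feats)
print(best_scale(A, B, feats))                          # typically 8, the centre index
```

In fDSST-style trackers the per-scale features are typically compressed (for example with PCA) before this step, and the numerator and denominator are updated with a running average over frames; the abstract does not say whether the proposed methods follow the same scheme.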