Decision Controller for Object Tracking With Deep Reinforcement Learning


Detailed description

Bibliographic details
Published in: IEEE Access, 2019, Vol. 7, pp. 28069-28079
Main authors: Zhong, Zhao; Yang, Zichen; Feng, Weitao; Wu, Wei; Hu, Yangyang; Liu, Cheng-Lin
Format: Article
Language: English
Description
Abstract: In both single object tracking (SOT) and multiple object tracking (MOT), many decisions are usually made heuristically. Existing methods tackle decision-making problems for specific tracking tasks without a unified framework. In this paper, we propose a decision controller (DC) that is generally applicable to both SOT and MOT tasks. The controller learns an optimal decision-making policy with a deep reinforcement learning algorithm that maximizes long-term tracking performance without supervision. To demonstrate the generalization ability of DC, we apply it to the challenging ensemble problem in SOT and the tracker-detector switching problem in MOT. In the tracker ensemble experiment, our ensemble-based tracker achieves leading performance in the VOT2016 challenge, and the light version also achieves a state-of-the-art result at 50 FPS. In the MOT experiment, we use the tracker-detector switching controller to enable real-time online tracking with competitive performance and a 10× speedup.
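The switching idea described in the abstract can be made concrete with a minimal sketch: a small policy network observes per-frame state features and decides whether to keep the fast tracker or invoke the detector, and is trained with a Monte-Carlo policy gradient (REINFORCE) to maximize long-horizon reward. This is not the authors' published implementation; the DecisionController class, the state features, the toy environment, and the reward shaping below are placeholder assumptions used only to illustrate the technique.

```python
# Hypothetical sketch of a reinforcement-learned decision controller for
# tracker-detector switching. Action 0 = keep the fast tracker, action 1 =
# invoke the (slower, more reliable) detector.
import torch
import torch.nn as nn
from torch.distributions import Categorical


class DecisionController(nn.Module):
    """Maps a per-frame state vector to a distribution over discrete actions."""

    def __init__(self, state_dim=4, n_actions=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state):
        return Categorical(logits=self.net(state))


def rollout(controller, env_step, horizon=50):
    """Run one tracking episode; collect per-frame log-probs and rewards."""
    log_probs, rewards = [], []
    state = torch.zeros(4)  # placeholder initial state features
    for _ in range(horizon):
        dist = controller(state)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward = env_step(state, action.item())
        rewards.append(reward)
    return log_probs, rewards


def reinforce_update(controller, optimizer, log_probs, rewards, gamma=0.99):
    """REINFORCE: weight each action's log-prob by its discounted return."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()


if __name__ == "__main__":
    # Toy environment standing in for a tracking sequence: invoking the
    # detector is penalized for its cost, mimicking the speed/accuracy
    # trade-off that the switching controller is meant to balance.
    def toy_env_step(state, action):
        next_state = torch.randn(4)
        reward = 1.0 - 0.3 * action + 0.1 * torch.randn(()).item()
        return next_state, reward

    controller = DecisionController()
    optimizer = torch.optim.Adam(controller.parameters(), lr=1e-3)
    for episode in range(200):
        log_probs, rewards = rollout(controller, toy_env_step)
        reinforce_update(controller, optimizer, log_probs, rewards)
```

In this reading, the reward would come from tracking quality over the whole sequence rather than per-frame supervision, which is what lets the controller trade off detector cost against long-term accuracy; the same policy-over-discrete-actions structure would apply to the SOT ensemble-selection setting, with actions indexing candidate trackers instead of tracker-versus-detector.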
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2900476