Multi-view confidence-aware method for adaptive Siamese tracking with shrink-enhancement loss

Bibliographic Details
Published in: Pattern Analysis and Applications (PAA), 2023-08, Vol. 26 (3), p. 1407-1424
Authors: Zhang, Huanlong; Ma, Zonghao; Zhang, Jie; Chen, Fuguo; Song, Xiaohui
Format: Article
Language: English
Description
Abstract: Many Siamese tracking algorithms attempt to enhance the target representation through target awareness. However, the tracking results are often disturbed by target-like background. In this paper, we propose a multi-view confidence-aware method for adaptive Siamese tracking. First, a shrink-enhancement loss is designed to select the channel features that are most sensitive to the target; it reduces the influence of easy background negative samples and enhances the contribution of hard background negative samples, thereby balancing the sample data. Second, to improve the reliability of the confidence map, a multi-view confidence-aware method is constructed. It integrates the response maps of the template, foreground, and background through a Multi-view Confidence Guide to highlight target features and suppress background interference, yielding a more discriminative target response map. Finally, to better accommodate changing tracking scenarios, we design a state estimation criterion for the tracking results and adaptively update the template. Experimental results show that the proposed tracking approach performs well on six benchmark datasets: OTB-2015, TC-128, UAV-123, DTB, VOT2016, and VOT2019.
ISSN: 1433-7541, 1433-755X
DOI: 10.1007/s10044-023-01169-5
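
The abstract describes fusing the template, foreground, and background response maps through a Multi-view Confidence Guide. The snippet below is a minimal sketch of such a fusion, assuming simple peak-sharpness confidence weights and background subtraction; the function names, weighting rule, and map sizes are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def fuse_response_maps(r_template, r_foreground, r_background, eps=1e-8):
    """Fuse three correlation response maps into one confidence map.

    Illustrative sketch only: the confidence weights and background
    subtraction below are assumptions, not the paper's Multi-view
    Confidence Guide.
    """
    def confidence(r):
        # Crude reliability score: how sharply the map peaks above its mean.
        return (r.max() - r.mean()) / (r.std() + eps)

    maps = [r_template, r_foreground, r_background]
    w = np.array([confidence(r) for r in maps])
    w = w / (w.sum() + eps)

    # Template and foreground views support the target; the background
    # view is used to suppress distractor regions (hence the minus sign).
    fused = w[0] * r_template + w[1] * r_foreground - w[2] * r_background
    return np.clip(fused, 0.0, None)

# Toy usage with random 17x17 response maps.
rng = np.random.default_rng(0)
rt, rf, rb = (rng.random((17, 17)) for _ in range(3))
conf_map = fuse_response_maps(rt, rf, rb)
peak = np.unravel_index(conf_map.argmax(), conf_map.shape)
print("predicted target location on the response grid:", peak)
```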