Toward Accurate Pixelwise Object Tracking via Attention Retrieval

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2021, Vol. 30, pp. 8553-8566
Authors: Zhang, Zhipeng; Liu, Yufan; Li, Bing; Hu, Weiming; Peng, Houwen
Format: Article
Language: English
Abstract: Pixelwise single object tracking is challenging due to the competing demands of running speed and segmentation accuracy. Current state-of-the-art real-time approaches seamlessly connect tracking and segmentation by sharing the computation of the backbone network; e.g., SiamMask and D3S fork a light branch from the tracking model to predict the segmentation mask. Although efficient, directly reusing features from tracking networks may harm segmentation accuracy, since background clutter in the backbone features tends to introduce false positives in the segmentation. To mitigate this problem, we propose a unified tracking-retrieval-segmentation framework consisting of an attention retrieval network (ARN) and an iterative feedback network (IFN). Instead of segmenting the target inside the bounding box, the proposed framework applies soft spatial constraints to the backbone features to obtain an accurate global segmentation map. Concretely, in ARN, a look-up table (LUT) is first built by fully exploiting the information in the first frame. By retrieving it, a target-aware attention map is generated to suppress the negative influence of background clutter. To further refine the contour of the segmentation, IFN iteratively enhances the features at different resolutions by taking the predicted mask as feedback guidance. Our framework sets a new state of the art on the recent pixelwise tracking benchmark VOT2020 and runs at 40 fps. Notably, the proposed model surpasses SiamMask by 11.7/4.2/5.5 points on VOT2020, DAVIS2016, and DAVIS2017, respectively. Code is available at https://github.com/JudasDie/SOTS.
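
The attention-retrieval idea described above can be illustrated with a short, hypothetical PyTorch sketch: a foreground/background look-up table is built from the first frame's backbone features and the given mask, and each search-frame pixel is compared against it to obtain a target-aware attention map that gates the backbone features before segmentation. All function names, tensor shapes, and the temperature constant are illustrative assumptions, not the released SOTS implementation.

# Minimal, illustrative sketch of LUT-based attention retrieval (assumed names/shapes).
import torch
import torch.nn.functional as F

def build_lut(first_feat, first_mask):
    """Collect L2-normalized foreground/background pixel features of frame 1.

    first_feat: (C, H, W) backbone features of the first frame.
    first_mask: (H, W) binary ground-truth mask of the target.
    Returns two LUTs of shape (N_fg, C) and (N_bg, C).
    """
    feat = F.normalize(first_feat.flatten(1).t(), dim=1)   # (H*W, C), unit-norm rows
    mask = first_mask.flatten().bool()                      # (H*W,)
    return feat[mask], feat[~mask]

def retrieve_attention(lut_fg, lut_bg, search_feat, temperature=0.1):
    """Retrieve a target-aware attention map for the search frame.

    Each search-frame pixel is matched (cosine similarity) against the
    foreground and background LUTs; the soft attention favours pixels that
    resemble the target and suppresses background clutter.
    """
    C, H, W = search_feat.shape
    feat = F.normalize(search_feat.flatten(1).t(), dim=1)   # (H*W, C)
    sim_fg = (feat @ lut_fg.t()).max(dim=1).values          # best match to target pixels
    sim_bg = (feat @ lut_bg.t()).max(dim=1).values          # best match to clutter pixels
    attn = torch.softmax(torch.stack([sim_fg, sim_bg], dim=1) / temperature, dim=1)[:, 0]
    return attn.view(1, H, W)                               # soft spatial constraint

# Usage: gate the backbone features of a search frame before the segmentation head.
first_feat, first_mask = torch.randn(64, 32, 32), torch.zeros(32, 32)
first_mask[10:20, 12:22] = 1
lut_fg, lut_bg = build_lut(first_feat, first_mask)
search_feat = torch.randn(64, 32, 32)
gated_feat = search_feat * retrieve_attention(lut_fg, lut_bg, search_feat)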
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2021.3117077