Robust Visual Tracking by Motion Analyzing
Format: Article
Language: English
Abstract: In recent years, Video Object Segmentation (VOS) has emerged as a
complementary method to Video Object Tracking (VOT). VOS focuses on classifying
all the pixels around the target, allowing for precise shape labeling, while
VOT primarily focuses on the approximate region where the target might be.
However, traditional segmentation modules usually classify pixels frame by
frame, disregarding information between adjacent frames.
In this paper, we propose a new algorithm that addresses this limitation by
analyzing the motion pattern using the inherent tensor structure. The tensor
structure, obtained through Tucker2 tensor decomposition, proves to be
effective in describing the target's motion. By incorporating this information,
we achieve results competitive with state-of-the-art trackers on four
benchmarks: LaSOT\cite{fan2019lasot}, AVisT\cite{noman2022avist},
OTB100\cite{7001050}, and GOT-10k\cite{huang2019got}. Furthermore, the
proposed tracker is capable of real-time operation, adding value to its
practical application.
DOI: 10.48550/arxiv.2309.03247
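
The abstract attributes the motion description to a Tucker2 tensor decomposition of the target's appearance over time. As a rough illustration only (not the authors' implementation), the sketch below applies a Tucker2 decomposition, computed via truncated HOSVD in plain NumPy, to a small stack of target crops; the frame count, crop size, and ranks are made-up values chosen for the example.

```python
# Minimal Tucker2 sketch (illustrative, not the paper's code): compress the two
# spatial modes of a (T, H, W) frame stack while leaving the temporal mode intact.
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: matricize `tensor` along `mode`."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tucker2(frames, rank_h, rank_w):
    """Tucker2 decomposition of a (T, H, W) stack via truncated HOSVD.

    Only the two spatial modes are compressed (to `rank_h` and `rank_w`);
    the temporal mode is left uncompressed, which is what distinguishes
    Tucker2 from a full Tucker decomposition.
    Returns the core tensor and the two spatial factor matrices.
    """
    # Leading left singular vectors of the mode-1 (height) unfolding.
    U_h, _, _ = np.linalg.svd(unfold(frames, 1), full_matrices=False)
    U_h = U_h[:, :rank_h]                      # (H, rank_h)
    # Leading left singular vectors of the mode-2 (width) unfolding.
    U_w, _, _ = np.linalg.svd(unfold(frames, 2), full_matrices=False)
    U_w = U_w[:, :rank_w]                      # (W, rank_w)
    # Core = frames x_1 U_h^T x_2 U_w^T (spatial modes projected onto the subspaces).
    core = np.einsum('thw,hi,wj->tij', frames, U_h, U_w)
    return core, U_h, U_w

# Toy usage: 8 grayscale target crops of size 64x64 (random data as a stand-in).
frames = np.random.rand(8, 64, 64)
core, U_h, U_w = tucker2(frames, rank_h=5, rank_w=5)
print(core.shape)  # (8, 5, 5): one compact core slice per frame
```

Because the temporal mode is not compressed, each frame keeps its own low-dimensional core slice, and the sequence of these slices is the kind of compact per-frame descriptor that could be used to characterize how the target moves across adjacent frames.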