SENSE: Hyperspectral video object tracker via fusing material and motion cues
Published in: Information Fusion, 2024-09, Vol. 109, p. 102395, Article 102395
Main authors: , , , , ,
Format: Article
Language: English
Online access: Full text
Summary:
• A unified hyperspectral video object tracking method fusing material and motion cues is proposed.
• A spectral-spatial self-expression module is proposed to adaptively obtain complementary false modalities, bridging the band-gap issue.
• A cross-false-modality fusion module is proposed to aggregate and enhance the differential-common features of the false modalities, yielding robust object representations.
• A motion awareness module is designed that enables continuous tracking of the object in abnormal states.
• Comprehensive experiments and in-depth analysis are conducted to validate the proposed method and provide pre-exploration for future research.
Hyperspectral video offers a wealth of material and motion cues about objects. This advantage proves invaluable in addressing the inherent limitations of generic RGB video tracking in complex scenarios such as illumination variation, background clutter, and fast motion. However, existing hyperspectral tracking methods often prioritize the material cue of objects while overlooking the motion cue contained in sequential frames, resulting in unsatisfactory tracking performance, especially under partial or full occlusion. To this end, this article proposes SENSE, a hyperspectral video object tracker that fuses material and motion cues. First, to fully exploit the material cue, we propose a spectral-spatial self-expression (SSSE) module that adaptively converts the hyperspectral image into complementary false modalities while effectively bridging the band gap. Second, we propose a cross-false-modality fusion (CFMF) module that aggregates and enhances the differential-common material features derived from the false modalities to arouse material awareness for robust object representations. Furthermore, a motion awareness (MA) module is designed, consisting of an awareness selector that determines the reliability of each cue and a motion prediction scheme that handles abnormal states. Extensive experiments demonstrate the effectiveness of the proposed method over state-of-the-art trackers.
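The abstract describes the SSSE, CFMF, and MA modules only at a conceptual level, without implementation details. The sketch below is a rough, illustrative rendering of how a material-plus-motion tracking step could be organized: hyperspectral bands are regrouped into three-channel "false" modalities, common and differential statistics are combined as a stand-in for material features, and a constant-velocity prediction takes over when the appearance response is unreliable. All function names, the band-grouping rule, and the fallback logic are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative only: a toy hyperspectral tracking step that combines a material
# (appearance) cue with a motion cue, in the spirit of SENSE's SSSE/CFMF/MA design.
import numpy as np

def ssse_false_modalities(hsi_frame, n_groups=3):
    """Stand-in for the SSSE idea: regroup hyperspectral bands into `n_groups`
    three-channel "false" modalities by averaging adjacent bands."""
    h, w, n_bands = hsi_frame.shape
    band_chunks = np.array_split(np.arange(n_bands), n_groups * 3)
    channels = [hsi_frame[..., idx].mean(axis=-1) for idx in band_chunks]
    return [np.stack(channels[i * 3:(i + 1) * 3], axis=-1) for i in range(n_groups)]

def fuse_material_features(modalities):
    """Stand-in for the CFMF idea: concatenate the common (mean) and differential
    (deviation-from-mean) statistics of the false modalities."""
    stack = np.stack(modalities, axis=0)          # (n_groups, H, W, 3)
    common = stack.mean(axis=0)                   # shared material response
    differential = np.abs(stack - common).mean(axis=0)
    return np.concatenate([common, differential], axis=-1)

class MotionAwareTracker:
    """Toy MA-style selector: trust the appearance box when its score is high,
    otherwise extrapolate the previous motion (constant-velocity fallback)."""

    def __init__(self, init_box, score_threshold=0.5):
        self.box = np.asarray(init_box, dtype=float)   # (cx, cy, w, h)
        self.velocity = np.zeros(2)
        self.score_threshold = score_threshold

    def update(self, appearance_box, appearance_score):
        if appearance_score >= self.score_threshold:
            # Material cue deemed reliable: accept the appearance localization.
            new_center = np.asarray(appearance_box[:2], dtype=float)
            self.velocity = new_center - self.box[:2]
            self.box = np.asarray(appearance_box, dtype=float)
        else:
            # Abnormal state (e.g., occlusion): rely on the motion cue instead.
            self.box[:2] += self.velocity
        return self.box.copy()

# Usage on a random 16-band frame.
hsi = np.random.rand(64, 64, 16)
features = fuse_material_features(ssse_false_modalities(hsi))  # (64, 64, 6) map
tracker = MotionAwareTracker(init_box=[32, 32, 10, 10])
print(tracker.update(appearance_box=[33, 31, 10, 10], appearance_score=0.8))
print(tracker.update(appearance_box=[0, 0, 10, 10], appearance_score=0.2))
```

In the actual method, the appearance score and localization would come from a learned matching network over the fused material features; here they are supplied directly so the sketch stays self-contained.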
ISSN: 1566-2535, 1872-6305
DOI: 10.1016/j.inffus.2024.102395