Video Co-Saliency Guided Co-Segmentation

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2018-08, Vol. 28 (8), pp. 1727-1736
Authors: Wang, Wenguan; Shen, Jianbing; Sun, Hanqiu; Shao, Ling
Format: Article
Language: English
Description
Abstract: We introduce the term video co-saliency to denote the task of extracting the common noticeable, or salient, regions from multiple relevant videos. The proposed video co-saliency approach accounts for both inter-video foreground correspondences and intra-video saliency stimuli to emphasize the salient foreground regions of video frames while disregarding irrelevant visual information in the background. Compared with image co-saliency, it is more reliable because it exploits the temporal information of video sequences. Benefiting from the discriminability of video co-saliency, we present a unified framework for segmenting out the common salient regions of relevant videos, guided by the video co-saliency prior. Unlike naive video co-segmentation approaches that rely on simple color differences and local motion features, the presented video co-saliency provides a more powerful indicator of the common salient regions, thereby enabling efficient video co-segmentation. Extensive experiments show that the proposed method successfully infers video co-saliency and extracts the common salient regions, outperforming the state-of-the-art methods.
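The core idea of the abstract — fusing an intra-video saliency map with an inter-video foreground-correspondence map, then using the result as a segmentation prior — can be illustrated with a minimal sketch. The elementwise-product fusion, the function names, and the fixed threshold here are simplifying assumptions for illustration only, not the paper's actual model:

```python
import numpy as np

def fuse_co_saliency(intra_saliency, inter_correspondence):
    """Combine an intra-video saliency map with an inter-video
    correspondence map into one co-saliency map (illustrative
    elementwise-product fusion, normalized to [0, 1])."""
    fused = intra_saliency * inter_correspondence
    rng = fused.max() - fused.min()
    if rng > 0:
        fused = (fused - fused.min()) / rng
    return fused

def co_segment(co_saliency, threshold=0.5):
    """Binary foreground mask driven by the co-saliency prior."""
    return co_saliency >= threshold

# Toy frame: only the centre is both salient within its own video
# and consistent with the foreground of the related videos.
intra = np.zeros((4, 4)); intra[1:3, 1:3] = 0.9  # intra-video saliency
inter = np.zeros((4, 4)); inter[1:3, 1:3] = 0.8  # inter-video correspondence
mask = co_segment(fuse_co_saliency(intra, inter))
print(int(mask.sum()))  # 4 foreground pixels
```

Regions salient in only one cue (e.g. a distracting background object salient in a single video but absent from the others) are suppressed by the product, which is the intuition behind using co-saliency rather than per-video saliency as the segmentation prior.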
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2017.2701279