Salient Object Detection With Spatiotemporal Background Priors for Video



Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2017-07, Vol. 26 (7), pp. 3425-3436
Main authors: Xi, Tao; Zhao, Wei; Wang, Han; Lin, Weisi
Format: Article
Language: English
Subjects:
Online access: Order full text
Description
Abstract: Saliency detection for images has been studied for many years, and many methods have been designed for it. In saliency detection, background priors, which are often regarded as pseudo-background, are effective clues for finding salient objects in images. Although the image boundary is commonly used as the background prior, it does not work well for images of complex scenes or for videos. In this paper, we explore how to identify the background priors for a video and propose a saliency-based method that detects visual objects using these priors. For a video, we integrate multiple pairs of scale-invariant feature transform (SIFT) flows from long-range frames, and a bidirectional consistency propagation is conducted to obtain accurate and sufficient temporal background priors, which are combined with spatial background priors to generate spatiotemporal background priors. Next, a novel dual-graph-based structure using the spatiotemporal background priors is put forward for the computation of saliency maps, fully taking advantage of appearance and motion information in videos. Experimental results on different challenging data sets show that the proposed method robustly and accurately detects video objects in both simple and complex scenes and achieves better performance than other state-of-the-art video saliency models.
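
The abstract describes a pipeline that derives temporal background priors from long-range SIFT-flow correspondences, merges them with boundary-based spatial priors, and then scores saliency over a dual graph. The Python sketch below only illustrates that general idea under simplifying assumptions of my own, not the authors' method: the toy grid features, the union-based prior combination, the precomputed temporal mask, and the background-contrast scoring (standing in for the paper's dual-graph computation) are all hypothetical choices.

import numpy as np

def spatial_background_prior(h, w, border=1):
    # Cells on the image boundary act as pseudo-background (the common spatial prior).
    mask = np.zeros((h, w), dtype=bool)
    mask[:border, :] = True
    mask[-border:, :] = True
    mask[:, :border] = True
    mask[:, -border:] = True
    return mask

def combine_priors(spatial_mask, temporal_mask):
    # Merge spatial and temporal background priors; a simple union is one illustrative choice.
    return spatial_mask | temporal_mask

def background_contrast_saliency(features, bg_mask, sigma=0.5):
    # Saliency of each cell = how far its feature lies from every background-prior cell.
    # features: (H*W, D) per-cell descriptors; bg_mask: (H, W) boolean prior.
    bg_feats = features[bg_mask.ravel()]
    d = np.linalg.norm(features[:, None, :] - bg_feats[None, :, :], axis=2)
    return 1.0 - np.exp(-(d.min(axis=1) ** 2) / (2.0 * sigma ** 2))

if __name__ == "__main__":
    h, w = 8, 8
    # Toy per-cell color features: dark background with a bright 3x3 "object".
    feats = np.zeros((h, w, 3))
    feats[3:6, 3:6] = 1.0
    spatial = spatial_background_prior(h, w)
    # Hypothetical temporal prior, e.g. cells verified as static across long-range frames.
    temporal = np.zeros((h, w), dtype=bool)
    temporal[0:2, :] = True
    prior = combine_priors(spatial, temporal)
    sal = background_contrast_saliency(feats.reshape(-1, 3), prior).reshape(h, w)
    print(np.round(sal, 2))

Running this prints a saliency map that is near zero on the boundary and temporal-prior cells and near one on the bright interior block; the paper's dual-graph propagation would refine such scores using both appearance and motion affinities rather than this plain background-contrast rule.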
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2016.2631900