Video Object Segmentation of Dynamic Scenes with Large Displacements


Bibliographic Details
Published in: IEICE Transactions on Information and Systems, 2015/09/01, Vol. E98.D(9), pp. 1719-1723
Authors: ZHANG, Yinhui; HE, Zifen
Format: Article
Language: English
Online access: Full text
Description
Abstract: Segmenting foreground objects in unconstrained dynamic scenes remains a difficult problem. We present a novel unsupervised segmentation approach that allows robust object segmentation of dynamic scenes with large displacements. To make this possible, we project motion-based foreground region hypotheses, generated via standard optical flow, onto visual saliency regions. The motion hypotheses correspond to inside-seed mappings of the motion boundary. For visual saliency, we generalize the image signature method from images to videos to delineate saliency maps of object proposals. The image signature maps, estimated in the Discrete Cosine Transform (DCT) domain, favor regions that stand out to the human visual system. We leverage a Markov random field built on superpixels to impose both spatial and temporal consistency constraints on the motion-saliency combined segments. Projecting salient regions via an image signature with inside mapping seeds facilitates segmenting ambiguous objects from unconstrained dynamic scenes in the presence of large displacements. We demonstrate the performance on fourteen challenging unconstrained dynamic scenes, compare our method with two state-of-the-art unsupervised video segmentation algorithms, and provide quantitative and qualitative performance comparisons.
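
To make the saliency component concrete, the sketch below is a minimal per-frame implementation of the generic DCT image-signature saliency map (keep only the sign of the 2-D DCT, inverse-transform, square, and smooth), which the abstract generalizes from images to videos. It is not the authors' exact video formulation; the function name and the smoothing width sigma are assumptions made for illustration.

```python
# Minimal sketch of DCT image-signature saliency for one frame.
# Assumed helper, not the paper's code; sigma is an illustrative value.
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def image_signature_saliency(frame, sigma=5.0):
    """Per-frame saliency from the DCT image signature.

    frame : float array, H x W (grayscale) or H x W x C (color).
    sigma : Gaussian smoothing width (assumed parameter).
    """
    if frame.ndim == 2:
        frame = frame[..., np.newaxis]
    saliency = np.zeros(frame.shape[:2])
    for c in range(frame.shape[2]):
        # Image signature: keep only the sign of the DCT coefficients.
        signature = np.sign(dctn(frame[..., c], norm='ortho'))
        # Back-project the signature and square it to highlight
        # spatially sparse, "stand-out" regions.
        recon = idctn(signature, norm='ortho')
        saliency += recon * recon
    # Smooth to obtain a dense saliency map, then normalize to [0, 1].
    saliency = gaussian_filter(saliency, sigma)
    return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)
```

In the approach described above, a map of this kind would then be intersected with motion-based foreground hypotheses (inside seeds of the optical-flow motion boundary) before the superpixel Markov random field enforces spatial and temporal consistency.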
ISSN: 0916-8532, 1745-1361
DOI: 10.1587/transinf.2015EDL8062