Figure-ground segmentation from occlusion

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2005-08, Vol. 14 (8), p. 1109-1124
Authors: Aguiar, P.M.Q., Moura, J.M.F.
Format: Article
Language: English
Abstract: Layered video representations are increasingly popular; see for a recent review. Segmentation of moving objects is a key step for automating such representations. Current motion segmentation methods either fail to segment moving objects in low-textured regions or are computationally very expensive. This paper presents a computationally simple algorithm that segments moving objects, even in low-texture/low-contrast scenes. Our method infers the moving object templates directly from the image intensity values, rather than computing the motion field as an intermediate step. Our model takes into account the rigidity of the moving object and the occlusion of the background by the moving object. We formulate the segmentation problem as the minimization of a penalized likelihood cost function and present an algorithm to estimate all the unknown parameters: the motions, the template of the moving object, and the intensity levels of the object and of the background pixels. The cost function combines a maximum likelihood estimation term with a term that penalizes large templates. The minimization algorithm performs two alternate steps for which we derive closed-form solutions. Relaxation improves the convergence even when low texture makes it very challenging to segment the moving object from the background. Experiments demonstrate the good performance of our method.
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2005.851712
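
The abstract outlines a two-step alternating minimization in which each step has a closed-form solution. The Python sketch below only illustrates that general structure under strong simplifying assumptions: the object motions are integer translations that are already known (the paper estimates them as well), registration is done with wraparound shifts via np.roll, and the template penalty is a single scalar added to a per-pixel sum-of-squares test. All function names, parameters, and the penalty value are hypothetical and not taken from the paper.

    import numpy as np

    def estimate_intensities(frames, shifts, template):
        # Closed-form intensity estimates given the current template and the
        # (assumed known) object translations.
        #   frames  : (K, H, W) grayscale image stack
        #   shifts  : K integer (dy, dx) object displacements, one per frame
        #   template: (H, W) boolean object mask in the object coordinate system
        registered = np.stack([np.roll(f, (-dy, -dx), axis=(0, 1))
                               for f, (dy, dx) in zip(frames, shifts)])
        # Object intensities: average of the frames registered to the object.
        obj = registered.mean(axis=0)
        # Background intensities: average each pixel over the frames in which
        # the shifted template does not occlude it.
        covered = np.stack([np.roll(template, (dy, dx), axis=(0, 1))
                            for (dy, dx) in shifts])
        weights = (~covered).astype(float)
        background = (frames * weights).sum(axis=0) / np.maximum(weights.sum(axis=0), 1.0)
        return obj, background

    def update_template(frames, shifts, obj, background, penalty):
        # Per-pixel template test: a pixel joins the template when the object
        # intensity explains its trajectory better than the background does,
        # by a margin larger than the area penalty (penalized-likelihood idea).
        registered = np.stack([np.roll(f, (-dy, -dx), axis=(0, 1))
                               for f, (dy, dx) in zip(frames, shifts)])
        bg_registered = np.stack([np.roll(background, (-dy, -dx), axis=(0, 1))
                                  for (dy, dx) in shifts])
        cost_obj = ((registered - obj) ** 2).sum(axis=0)
        cost_bg = ((registered - bg_registered) ** 2).sum(axis=0)
        return cost_obj + penalty < cost_bg

    def segment(frames, shifts, penalty=50.0, n_iters=10):
        # Alternate the two closed-form steps, starting from an empty template.
        _, H, W = frames.shape
        template = np.zeros((H, W), dtype=bool)
        for _ in range(n_iters):
            obj, background = estimate_intensities(frames, shifts, template)
            template = update_template(frames, shifts, obj, background, penalty)
        return template, obj, background

In the paper the motions are unknown and estimated within the same alternating scheme, and relaxation of the penalized cost helps convergence in low-texture scenes; the wraparound registration and the scalar penalty above are simplifications for illustration only.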