Pixel-to-Model Distance for Robust Background Reconstruction
Published in: IEEE Transactions on Circuits and Systems for Video Technology, May 2016, Vol. 26 (5), pp. 903-916
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Background information is crucial for many video surveillance applications such as object detection and scene understanding. In this paper, we present a novel pixel-to-model (P2M) paradigm for background modeling and restoration in surveillance scenes. In particular, the proposed approach models the background with a set of context features for each pixel, which are compressively sensed from local patches. We determine whether a pixel belongs to the background according to the minimum P2M distance, which measures the similarity between the pixel and its background model in the space of compressive local descriptors. The pixel feature descriptors of the background model are updated according to the minimum P2M distance. Meanwhile, the neighboring background model is renewed according to the maximum P2M distance to handle ghost holes. The P2M distance thus serves as a measure of background reliability in the 3-D spatiotemporal domain of surveillance videos, leading to a robust background model and recovered background videos. We applied the proposed P2M distance to foreground detection and background restoration on synthetic and real-world surveillance videos. Experimental results show that the proposed P2M approach outperforms state-of-the-art approaches in both indoor and outdoor surveillance scenes.
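As a reading aid, the following is a minimal NumPy sketch of how a pixel-to-model pipeline of this kind could be organized. It is not the authors' implementation: the patch size, descriptor dimension, sample count, L1 distance, threshold `TAU`, and stochastic update rate are all illustrative assumptions, and the paper defines its own compressive descriptors and update rules in the full text.

```python
# Hedged sketch of a P2M-style background model. Everything numeric below
# (patch size, descriptor dimension, threshold, update rate) is an
# illustrative assumption, not a value from the paper.
import numpy as np

PATCH = 5            # local patch side length (assumption)
DIM = 16             # compressive descriptor dimension (assumption)
SAMPLES = 20         # background samples kept per pixel (assumption)
TAU = 0.5            # P2M decision threshold (assumption, scale-dependent)
UPDATE_PROB = 1 / 16 # stochastic update rate (assumption)

rng = np.random.default_rng(0)
# Fixed random projection: compressively senses a flattened patch into DIM dims.
PHI = rng.standard_normal((DIM, PATCH * PATCH)) / np.sqrt(PATCH * PATCH)

def descriptor(frame, y, x):
    """Compressive context feature for the patch centred at (y, x)."""
    r = PATCH // 2
    patch = frame[y - r:y + r + 1, x - r:x + r + 1].ravel()
    return PHI @ patch

def p2m_distance(desc, samples):
    """L1 distances (assumed metric) from a descriptor to each model sample."""
    return np.abs(samples - desc).mean(axis=1)

def init_model(frame):
    """Seed every interior pixel's samples with its first-frame descriptor."""
    H, W = frame.shape
    model = np.zeros((H, W, SAMPLES, DIM))
    r = PATCH // 2
    for y in range(r, H - r):
        for x in range(r, W - r):
            model[y, x] = descriptor(frame, y, x)  # broadcast to all samples
    return model

def process_pixel(frame, model, y, x):
    """Classify one interior pixel and update its model; True = foreground."""
    desc = descriptor(frame, y, x)
    d = p2m_distance(desc, model[y, x])
    if d.min() > TAU:                      # minimum P2M distance rules the label
        return True                        # foreground
    if rng.random() < UPDATE_PROB:
        model[y, x, d.argmin()] = desc     # refresh the closest sample
        # Renew a random neighbour's most dissimilar sample (maximum P2M
        # distance) so that ghost holes left by removed objects are absorbed.
        ny = int(np.clip(y + rng.integers(-1, 2), 0, model.shape[0] - 1))
        nx = int(np.clip(x + rng.integers(-1, 2), 0, model.shape[1] - 1))
        dn = p2m_distance(desc, model[ny, nx])
        model[ny, nx, dn.argmax()] = desc
    return False                           # background
```

In the actual method, the samples would be seeded from several initial frames and the threshold tied to the descriptor statistics; the sketch only shows the control flow the abstract describes: classify by the minimum P2M distance, refresh the nearest sample, and renew the most dissimilar sample of a neighboring model.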
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2015.2424052