Foreground detection using spatiotemporal projection kernels
Main authors:
Format: Conference paper
Language: English
Online access: Order full text
Abstract: In this paper, we propose a novel video foreground detection method that exploits the statistics of 3D space-time patches. 3D space-time patches are characterized by means of the subspace they span. As the complexity of real-time systems prohibits performing this modeling directly on the raw pixel data, we propose a novel framework in which spatiotemporal data is sequentially reduced in two stages. The first stage reduces the data using a cascade of linear projections of 3D space-time patches onto a small set of 3D Walsh-Hadamard (WH) basis functions, known for their energy compaction of natural images and videos. This stage is efficiently implemented using the Gray-Code filtering scheme [2], requiring only 2 operations per projection. In the second stage, the data is further reduced by applying PCA directly to the WH coefficients, exploiting the local statistics in an adaptive manner. Unlike common techniques, this spatiotemporal adaptive projection exploits window appearance as well as its dynamic characteristics. Tests show that the proposed method outperforms recent foreground detection methods and is suitable for real-time implementation on streaming video.
ISSN: 1063-6919
DOI: 10.1109/CVPR.2012.6248056
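
The abstract describes a two-stage reduction: 3D space-time patches are first projected onto a few 3D Walsh-Hadamard kernels, then PCA is applied to the WH coefficients, and foreground is flagged by how far a patch falls from the learned background subspace. The sketch below illustrates that pipeline only in outline and is not the authors' implementation: it uses a plain matrix projection in place of the paper's Gray-Code filtering scheme [2], keeps the first naturally-ordered WH kernels as a stand-in for the low-sequency kernels used for energy compaction, and the patch size, number of kept kernels, PCA dimension, and scoring rule are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation) of the
# two-stage reduction described in the abstract: 3D Walsh-Hadamard
# projection of space-time patches followed by PCA on the WH coefficients.
import numpy as np

def hadamard(n):
    """Naturally-ordered Hadamard matrix of size n (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def wh_project(patches, num_kernels):
    """Project flattened s x s x s space-time patches onto the first
    `num_kernels` 3D WH basis functions (the separable 3D basis is the
    Kronecker product of three 1D Hadamard matrices). A dense matrix
    product stands in for the paper's efficient Gray-Code filtering."""
    s = round(patches.shape[1] ** (1 / 3))   # cubic patches assumed
    H = hadamard(s)
    H3 = np.kron(np.kron(H, H), H)           # rows are flattened 3D WH kernels
    return patches @ H3[:num_kernels].T      # WH coefficients, one row per patch

def pca_subspace(coeffs, dim):
    """Fit a low-dimensional PCA subspace to background WH coefficients."""
    mean = coeffs.mean(axis=0)
    _, _, Vt = np.linalg.svd(coeffs - mean, full_matrices=False)
    return mean, Vt[:dim]                    # (mean, subspace basis)

def foreground_score(coeffs, mean, basis):
    """Reconstruction error w.r.t. the background subspace; large values
    indicate patches unlikely to be background (i.e. foreground)."""
    centered = coeffs - mean
    residual = centered - (centered @ basis.T) @ basis
    return np.linalg.norm(residual, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic example: 500 background patches and 5 outlier patches,
    # each a flattened 4x4x4 space-time block (64 values).
    bg = rng.normal(0.0, 0.1, size=(500, 64))
    fg = rng.normal(2.0, 0.1, size=(5, 64))
    bg_coeffs = wh_project(bg, num_kernels=10)    # stage 1: WH reduction
    mean, basis = pca_subspace(bg_coeffs, dim=3)  # stage 2: PCA on coefficients
    scores = foreground_score(wh_project(fg, 10), mean, basis)
    print("foreground scores:", scores)
```

In this toy usage, thresholding the printed scores separates the outlier patches from the background set; the paper instead maintains the model adaptively on streaming video, which this sketch does not attempt.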