Robust Temporal Activity Templates Using Higher Order Statistics
Published in: IEEE Transactions on Image Processing, 2009-12, Vol. 18 (12), p. 2756-2768
Main authors: ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: A robust, theoretically founded approach for the extraction of temporal templates corresponding to areas of motion in video is presented. Higher order statistics (kurtosis) are employed to extract activity areas, i.e., binary masks indicating which pixels in a video are active. Applying the kurtosis to illumination changes modeled as Gaussians and as mixtures of Gaussians is shown to be sensitive to outliers under both models, thus correctly localizing active pixels. Activity areas are compared to existing, difference-based temporal templates, known as motion energy images, and the robustness of both categories of temporal templates to additive noise is analyzed theoretically. Experiments with numerous real videos with additive noise, both indoors and outdoors, are conducted to compare the robustness of the activity areas and motion energy images, and of their temporal extensions, the activity history areas and motion history images. As expected from the theoretical analysis, the kurtosis-based activity areas prove to be more robust than the difference-based templates. Challenging videos containing occlusions, varying backgrounds, and shadows are also examined, and it is shown that the proposed approach outperforms the difference-based method in these cases as well, consistently providing reliable localization of activity under a wide range of difficult circumstances. The proposed approach gives good results at a very low computational cost, without requiring prior knowledge about the scene or training of any kind.
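The core idea described in the abstract can be illustrated with a short sketch: estimate the sample kurtosis of each pixel's intensity over time, and flag pixels whose kurtosis deviates strongly from the Gaussian value of 3 as active. This is a minimal illustration only; the function name, the threshold, and the per-pixel estimator are assumptions, not the paper's exact formulation.

```python
import numpy as np

def activity_area(frames, threshold=3.0):
    """Sketch of kurtosis-based activity-area extraction.

    frames: (T, H, W) array of grayscale frames.
    Returns a binary mask of pixels whose temporal kurtosis
    deviates strongly from the Gaussian value of 3, which
    flags outlier-laden (active) pixels.
    """
    x = frames.astype(np.float64)
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    # Fourth central moment of each pixel's intensity over time.
    m4 = ((x - mu) ** 4).mean(axis=0)
    # kurtosis = m4 / var^2; guard against static (zero-variance) pixels,
    # which are assigned the Gaussian value 3 (i.e., inactive).
    kurt = np.where(var > 1e-12, m4 / np.maximum(var, 1e-12) ** 2, 3.0)
    # Large excess kurtosis |kurt - 3| marks non-Gaussian, active pixels.
    return np.abs(kurt - 3.0) > threshold

# Synthetic demo: Gaussian background noise everywhere, plus a brief
# intensity burst (a passing object) in an 8x8 patch.
rng = np.random.default_rng(0)
frames = rng.normal(100.0, 2.0, size=(60, 32, 32))
frames[10:14, 8:16, 8:16] += 80.0
mask = activity_area(frames)
```

Because the burst occupies only a few of the 60 frames, it behaves as an outlier in each affected pixel's temporal distribution and drives the kurtosis well above 3, while pure-noise pixels stay near the Gaussian value and are left inactive.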
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2009.2029595