A Tensor Foreground-Background Separation Algorithm Based on Dynamic Dictionary Update and Active Contour Detection
Published in: IEEE Access, 2020, Vol. 8, pp. 88259-88272
Main authors: , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Foreground-background separation of surveillance video, which models the static background and extracts the moving foreground simultaneously, has attracted increasing attention in building smart cities. Conventional techniques usually treat the background as the primary target and adopt a low-rank constraint as its estimator, which offers only a finite number of alternatives (equal to the rank) when constructing the background. In practical missions, however, although the general outline of the background is stable, some details change constantly. To address this, we propose to represent the general background by a linear combination of atoms and to record the detailed background by spatiotemporally clustered patches. The moving foreground is then modeled as a mixture of active contours and continuous contents. Finally, joint optimization is conducted under a unified framework, the alternating direction method of multipliers (ADMM), yielding our tensor model for hierarchical background and hierarchical foreground separation (THHS). The employed tensor space, which agrees with the intrinsic structure of video data, benefits all the spatiotemporal designs in both the background module and the foreground part. Experimental results show that THHS adapts better to dynamic backgrounds and produces more accurate foregrounds than current state-of-the-art techniques.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.2992494
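
The abstract refers to an ADMM-based joint optimization that separates a video into background and foreground layers. As orientation only, the sketch below shows the classical ADMM (inexact ALM) solver for the plain low-rank plus sparse decomposition D = L + S that such methods build on; it is not the authors' THHS implementation, and the function and parameter names (rpca_admm, lam, mu, n_iter) are illustrative assumptions.

```python
# Minimal sketch (not the paper's THHS code): ADMM-based robust PCA splitting
# a frames-as-columns matrix D into a low-rank background L and a sparse
# moving-foreground S. All names and defaults here are illustrative.
import numpy as np

def soft_threshold(x, tau):
    """Element-wise shrinkage operator used for the sparse (foreground) term."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svd_threshold(x, tau):
    """Singular-value shrinkage used for the low-rank (background) term."""
    U, s, Vt = np.linalg.svd(x, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

def rpca_admm(D, lam=None, mu=None, n_iter=200, tol=1e-7):
    """Decompose D (pixels x frames) into background L plus foreground S."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))   # common default weight
    mu = mu if mu is not None else 0.25 * m * n / np.abs(D).sum()
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                                         # Lagrange multipliers
    for _ in range(n_iter):
        L = svd_threshold(D - S + Y / mu, 1.0 / mu)              # background update
        S = soft_threshold(D - L + Y / mu, lam / mu)             # foreground update
        residual = D - L - S
        Y = Y + mu * residual                                    # dual update
        if np.linalg.norm(residual) <= tol * np.linalg.norm(D):
            break
    return L, S

# Usage with synthetic data: a (frames, height, width) clip flattened so that
# each column of D holds one vectorized frame.
video = np.random.rand(50, 48, 64)
D = video.reshape(video.shape[0], -1).T
background, foreground = rpca_admm(D)
```

THHS replaces the single low-rank background term of this baseline with a dictionary of atoms plus spatiotemporal patch clusters, and augments the sparse term with active-contour and continuity priors, but the ADMM splitting structure above is the shared optimization backbone.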