Moving Object Detection Using Tensor-Based Low-Rank and Saliently Fused-Sparse Decomposition

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Feb. 2017, Vol. 26(2), pp. 724-737
Authors: Hu, Wenrui; Yang, Yehui; Zhang, Wensheng; Xie, Yuan
Format: Article
Language: English
Abstract
In this paper, we propose a new low-rank and sparse representation model for moving object detection. The model preserves the natural space-time structure of video sequences by representing them as three-way tensors, and it then performs the low-rank background and sparse foreground decomposition in the tensor framework. On the one hand, we use the tensor nuclear norm, based on the circulant algebra, to exploit the spatio-temporal redundancy of the background. On the other hand, we use the newly designed saliently fused-sparse (SFS) regularizer to adaptively constrain the foreground with spatio-temporal smoothness. To refine existing foreground smoothness regularizers, the SFS incorporates local spatio-temporal geometric structure information into the tensor total variation by using the 3D locally adaptive regression kernel (3D-LARK). Moreover, the SFS further uses the 3D-LARK to compute the space-time motion saliency of the foreground, which is combined with the l1 norm and improves the robustness of foreground extraction. Finally, we solve the proposed model with a globally optimal guarantee. Extensive experiments on challenging, well-known data sets demonstrate that our method significantly outperforms state-of-the-art approaches and works effectively across a wide range of complex scenarios.
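For context, a tensor low-rank plus sparse decomposition of the kind described above can be sketched in the following generic form; this is only an illustrative formulation in generic notation (video tensor \(\mathcal{X}\), background \(\mathcal{B}\), foreground \(\mathcal{F}\), trade-off weights \(\lambda_1, \lambda_2\), 3D-LARK-derived saliency weights \(\mathcal{W}\)), not the paper's exact SFS objective:

\[
\min_{\mathcal{B},\,\mathcal{F}} \; \|\mathcal{B}\|_{\circledast} \;+\; \lambda_1\,\mathrm{TV}_{\mathrm{3D\text{-}LARK}}(\mathcal{F}) \;+\; \lambda_2\,\|\mathcal{W} \circ \mathcal{F}\|_{1}
\quad \text{s.t.} \quad \mathcal{X} = \mathcal{B} + \mathcal{F},
\]

where \(\|\cdot\|_{\circledast}\) denotes a tensor nuclear norm defined through the circulant algebra (t-SVD), the weighted total-variation term enforces spatio-temporal smoothness of the foreground, and \(\circ\) is the element-wise product.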
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2016.2627803