SVD-Based Tensor-Completion Technique for Background Initialization
Published in: IEEE Transactions on Image Processing, June 2018, Vol. 27, No. 6, pp. 3114-3126
Main Authors: , , ,
Format: Article
Language: English
Abstract: Extracting the background from a video in the presence of various moving patterns is the focus of several background-initialization approaches. To model the scene background using rank-one matrices, this paper proposes a background-initialization technique that relies on the singular-value decomposition (SVD) of spatiotemporally extracted slices of the video tensor. The proposed method is referred to as spatiotemporal slice-based SVD (SS-SVD). To determine the SVD components that best model the background, an in-depth analysis of the computation of the left/right singular vectors and singular values is performed, and their relationship with the tensor's tube fibers is established. The analysis proves that the rank-1 matrix formed from the first left and right singular vectors and the first singular value is an efficient model of the scene background. The performance of the proposed SS-SVD method is evaluated on 93 complex video sequences covering a range of challenges, and the method is compared with state-of-the-art tensor/matrix-completion-based, statistical, search-based, and labeling-based methods. The results not only show better performance over most of the tested challenges, but also demonstrate the capability of the proposed technique to solve the background-initialization problem in less computation time and with fewer frames.
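The core step is easy to sketch: for a fixed spatial row, the horizontal spatiotemporal slice of the video is a matrix whose columns repeat wherever the scene is static, so the first singular triplet is dominated by the stationary background while moving objects leak into the lower-energy components. The snippet below is a minimal NumPy illustration of that rank-1 idea, not the authors' exact pipeline; the (T, H, W) tensor layout, the averaging over time, and the function names are assumptions made for demonstration.

```python
import numpy as np

def rank1_background_row(video, y):
    """Model one background row from a horizontal spatiotemporal slice.

    `video` is assumed to be a (T, H, W) array (the layout is a guess,
    not the paper's convention); fixing row `y` yields a T x W slice.
    """
    slice_yt = video[:, y, :].astype(np.float64)
    # First singular triplet of the slice: with a mostly static scene,
    # the stationary background dominates this component while moving
    # objects leak into the lower-energy triplets.
    U, S, Vt = np.linalg.svd(slice_yt, full_matrices=False)
    rank1 = S[0] * np.outer(U[:, 0], Vt[0, :])  # rank-1 model of the slice
    # Collapse the temporal axis to one background row; this averaging is
    # an assumption of the sketch, not necessarily the paper's exact step.
    return rank1.mean(axis=0)

def ss_svd_background(video):
    """Assemble a background image row by row from all horizontal slices."""
    _, H, _ = video.shape
    return np.stack([rank1_background_row(video, y) for y in range(H)])

if __name__ == "__main__":
    # Synthetic clip: a static horizontal gradient plus a moving bright block.
    T, H, W = 40, 32, 48
    bg = np.tile(np.linspace(0.0, 255.0, W), (H, 1))
    video = np.tile(bg, (T, 1, 1))
    for t in range(T):
        x = (2 * t) % (W - 8)
        video[t, 10:18, x:x + 8] = 255.0  # foreground occludes the background
    est = ss_svd_background(video)
    print("max abs error vs true background:", np.abs(est - bg).max())
```

On this toy clip the estimate tracks the static gradient even though the block occludes part of every frame; the paper's contribution lies in the analysis proving that the first triplet suffices and in handling the harder challenges covered by its 93 test sequences.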
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2018.2817045