Compressive Video Sampling With Approximate Message Passing Decoding

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2012-09, Vol. 22 (9), pp. 1354-1364
Authors: Jianwei Ma, Plonka, G., Hussaini, M. Y.
Format: Article
Language: English
Description
Abstract: In this paper, we apply compressed sensing (CS) to video compression. CS techniques exploit the observation that one needs far fewer random measurements than required by the Shannon-Nyquist sampling theorem to recover an object if this object is compressible (i.e., sparse in the spatial domain or in a transform domain). In the CS framework, we can achieve sensing, compression, and denoising simultaneously. We propose a fast and simple online encoding by the application of pseudorandom downsampling of the 2-D fast Fourier transform to video frames. For offline decoding, we apply a modification of the recently proposed approximate message passing (AMP) algorithm. The AMP method has been derived using the statistical concept of "state evolution," and it has been shown to considerably accelerate the convergence rate in special CS-decoding applications. We shall prove that the AMP method can be rewritten as a forward-backward splitting algorithm. This new representation enables us to give conditions that ensure convergence of the AMP method and to modify the algorithm in order to achieve higher robustness. The success of reconstruction methods for video decoding also essentially depends on the chosen transform in which sparsity of the video signals is assumed. We propose incorporating the 3-D dual-tree complex wavelet transform, which possesses sufficiently good directional selectivity while being computationally less expensive and less redundant than other directional 3-D wavelet transforms.
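
The encoder described in the abstract (pseudorandom downsampling of the 2-D FFT of each frame) and the forward-backward view of the decoder can be illustrated with a minimal sketch. The Python sketch below is not the paper's implementation: it works frame by frame, assumes sparsity directly in the spatial domain instead of the paper's 3-D dual-tree complex wavelet domain, and replaces the modified AMP decoder with plain iterative soft-thresholding (a basic forward-backward splitting), omitting the Onsager correction term that distinguishes AMP. The function names and the keep_ratio, tau, and iteration-count parameters are illustrative assumptions, not values from the paper.

import numpy as np

def sense_frame(frame, keep_ratio=0.25, seed=0):
    # CS encoding sketch: keep a pseudorandom subset of the frame's 2-D FFT coefficients.
    rng = np.random.default_rng(seed)
    mask = rng.random(frame.shape) < keep_ratio       # pseudorandom sampling pattern
    measurements = np.fft.fft2(frame) * mask          # retained Fourier samples (zeros elsewhere)
    return measurements, mask

def soft_threshold(z, tau):
    # Proximal map of the l1 norm for complex-valued arrays.
    mag = np.abs(z)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * z, 0.0)

def decode_frame(measurements, mask, n_iter=200, tau=5.0):
    # Forward-backward (ISTA-style) decoder sketch assuming spatial-domain sparsity.
    # The paper instead uses a modified AMP iteration with a 3-D dual-tree complex
    # wavelet sparsity transform; both refinements are omitted here for brevity.
    x = np.zeros(mask.shape, dtype=complex)
    for _ in range(n_iter):
        residual = measurements - np.fft.fft2(x) * mask  # misfit on the sampled frequencies
        x = x + np.fft.ifft2(residual)                   # gradient (forward) step
        x = soft_threshold(x, tau)                       # proximal (backward) step
    return x.real

A frame could then be processed as y, m = sense_frame(frame) followed by rec = decode_frame(y, m). For realistic video content a sparsifying transform (in the paper, the 3-D dual-tree complex wavelet transform applied across a group of frames) would be used inside the thresholding step rather than thresholding pixel values directly.
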
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2012.2201673