Rate-Distortion Analysis of Motion-Compensated Interpolation at the Decoder in Distributed Video Coding

Bibliographic Details
Published in: IEEE Signal Processing Letters, September 2007, Vol. 14, No. 9, pp. 625-628
Authors: Tagliasacchi, M., Frigerio, L., Tubaro, S.
Format: Article
Language: English
Description
Abstract: This letter analyzes the coding efficiency of distributed video coding (DVC) schemes that perform motion-compensated interpolation at the decoder. When generating the side information for intermediate frames, the decoder has access only to the key frames. Therefore, the true motion field required for this operation is not directly available, and the motion vectors must be estimated at the decoder side, introducing displacement estimation errors. The accuracy of the motion-compensated interpolation at the decoder depends on several factors: 1) the overall motion complexity; 2) the temporal coherence of the motion field; and 3) the temporal distance between successive key frames. Adopting a state-space model and a Kalman filtering framework, we obtain an estimate of the displacement error variance. This estimate is used to determine the rate-distortion function of the overall coding scheme, which takes into account both intra-coded key frames and DVC-coded frames. The proposed model shows that motion-compensated interpolation is unable to achieve the coding efficiency of conventional motion-compensated predictive coding. In addition, the model provides a good estimate of the group of pictures (GOP) size that optimizes the coding efficiency. Experimental results on real video sequences validate the proposed model.
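To illustrate the kind of analysis the abstract describes, the sketch below is a minimal, hypothetical version of it: it assumes a scalar random-walk state-space model for one motion-vector component, propagates the displacement-error variance with the standard Kalman prediction step over the gap between key frames, and plugs that variance into a placeholder Gaussian rate-penalty expression. The parameters `q`, `r`, and `grad_energy`, and the side-information error model, are assumptions for illustration and are not taken from the letter's actual derivation.

```python
import numpy as np

# Hypothetical scalar state-space model for one motion-vector component:
#   d_k = d_{k-1} + w_k,  w_k ~ N(0, q)   (motion evolves as a random walk)
#   z_k = d_k + v_k,      v_k ~ N(0, r)   (displacement observable only at key frames)
# Inside a GOP the decoder sees no measurements, so the displacement-error
# variance grows with the temporal distance between key frames.

def displacement_error_variance(p0, q, gap):
    """Kalman prediction step applied `gap` times (no measurement updates
    between key frames, since only key frames are decoded)."""
    p = p0
    for _ in range(gap):
        p = p + q  # variance of the predicted displacement error
    return p

def kalman_update(p_pred, r):
    """Standard scalar Kalman measurement update at a key frame."""
    k_gain = p_pred / (p_pred + r)
    return (1.0 - k_gain) * p_pred

def rate_penalty_bits(sigma_mc2, sigma_disp2, grad_energy):
    """Hypothetical Gaussian high-rate penalty of decoder-side interpolation
    versus encoder-side motion compensation: 0.5*log2(sigma_si^2 / sigma_mc^2),
    where the side-information error variance grows with the displacement
    error variance scaled by a local gradient-energy term."""
    sigma_si2 = sigma_mc2 + grad_energy * sigma_disp2
    return 0.5 * np.log2(sigma_si2 / sigma_mc2)

if __name__ == "__main__":
    q, r, p0 = 0.05, 0.10, 0.0  # hypothetical model parameters
    for gop in (2, 4, 8, 16):
        p_pred = displacement_error_variance(p0, q, gop)
        p_post = kalman_update(p_pred, r)
        penalty = rate_penalty_bits(sigma_mc2=1.0, sigma_disp2=p_pred, grad_energy=4.0)
        print(f"GOP={gop:2d}  predicted var={p_pred:.3f}  "
              f"updated var={p_post:.3f}  rate penalty ~ {penalty:.2f} bit/sample")
```

Under these assumed parameters, the printed rate penalty grows with the GOP size, which is the qualitative trade-off the letter quantifies when it derives the GOP size that optimizes overall coding efficiency.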
ISSN: 1070-9908, 1558-2361
DOI: 10.1109/LSP.2007.896187