Learned Wavelet Video Coding Using Motion Compensated Temporal Filtering


Saved in:
Bibliographic Details
Published in: IEEE Access, 2023, Vol. 11, pp. 113390-113401
Main Authors: Meyer, Anna, Brand, Fabian, Kaup, Andre
Format: Article
Language: English
Subjects:
Online Access: Full Text
Description
Summary: This paper presents an end-to-end trainable wavelet video coder based on motion-compensated temporal filtering (MCTF). It thereby introduces a coding scheme distinct from the residual and conditional coding approaches that dominate learned video compression. By performing discrete wavelet transforms in the temporal, horizontal, and vertical dimensions, an explainable framework with spatial and temporal scalability is obtained. The paper investigates a novel trainable motion-compensated temporal filtering module implemented using the lifting scheme and demonstrates how multiple temporal decomposition levels can be considered during training. Furthermore, larger temporal displacements owing to the coding order are addressed, and an extension adapting to different motion strengths during inference is introduced. The experimental analysis compares the proposed approach to learning-based coders and traditional hybrid video coding. Especially at high rates, the approach exhibits promising rate-distortion performance: it achieves average Bjøntegaard Delta rate savings of up to 21% over HEVC and outperforms state-of-the-art learned video coders.
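The lifting scheme mentioned in the summary splits each frame pair into a motion-compensated high-pass (predict) and low-pass (update) subband while guaranteeing perfect reconstruction. A minimal Haar-style sketch is given below; the `warp` callable stands in for motion-compensated alignment (the paper uses a learned module), and the identity default used here is purely an illustrative assumption:

```python
import numpy as np

def mctf_lifting_step(frame_a, frame_b, warp=lambda x: x):
    """One temporal lifting step (Haar-style MCTF sketch).

    `warp` is a placeholder for motion-compensated alignment;
    the identity default is an assumption for illustration only.
    """
    # Predict: high-pass subband = residual after motion-compensated prediction
    high = frame_b - warp(frame_a)
    # Update: low-pass subband = temporally filtered reference frame
    low = frame_a + 0.5 * warp(high)
    return low, high

def mctf_inverse_step(low, high, warp=lambda x: x):
    """Invert the update step, then the predict step (perfect reconstruction)."""
    frame_a = low - 0.5 * warp(high)
    frame_b = high + warp(frame_a)
    return frame_a, frame_b
```

Because each lifting step is invertible regardless of the predictor, multiple temporal decomposition levels can be obtained by recursively applying the forward step to the low-pass subbands.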
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3323873