Quality Enhancement Network via Multi-Reconstruction Recursive Residual Learning for Video Coding

Full Description

Saved in:
Bibliographic Details
Published in: IEEE Signal Processing Letters, 2019-04, Vol. 26 (4), pp. 557-561
Main authors: Yu, Liangwei; Shen, Liquan; Yang, Hao; Wang, Lu; An, Ping
Format: Article
Language: English
Description
Abstract: Lossy compression algorithms introduce multiple compression artifacts that severely degrade visual quality. These artifacts are closely related to texture content, and the hierarchical coding-unit decision structure also gives them multi-scale similarity. Current loop filters fail to exploit these characteristics to comprehensively remove compression artifacts. To this end, this letter proposes a novel quality enhancement method based on a multi-reconstruction recurrent residual network (MRRN). In particular, a modified recursive residual structure is designed to capture the multi-scale similarity of compression artifacts. To effectively enhance frames with uneven noise, a multi-reconstruction structure is proposed that outputs images with different denoising ratios and adaptively fuses them. Experimental results show that the proposed MRRN improves coding efficiency by up to 15.1% compared with the original loop filter in High Efficiency Video Coding (HEVC). On average, BD-rate reductions of 6.7%, 7.8%, and 7.6% are achieved for the all-intra, low-delay P, and low-delay B configurations, respectively. Meanwhile, as a quality enhancement method performed at the encoder side, MRRN also achieves a good balance between coding performance and computational complexity compared with state-of-the-art methods.
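The multi-reconstruction idea described in the abstract — produce several candidate reconstructions at different denoising strengths, then fuse them with adaptive weights — can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's method: the real MRRN uses a recurrent residual CNN for denoising and a learned fusion stage, whereas the mean-blend denoiser and softmax weighting below are assumptions made purely for demonstration.

```python
import math

def denoise(frame, strength):
    """Toy denoiser: blend each pixel toward the frame mean.
    strength in [0, 1] plays the role of the 'denoise ratio'.
    (Stand-in for the paper's recursive residual CNN.)"""
    mean = sum(frame) / len(frame)
    return [(1.0 - strength) * p + strength * mean for p in frame]

def adaptive_fusion(candidates, scores):
    """Fuse candidate reconstructions with softmax-normalized weights,
    a simple stand-in for the letter's learned adaptive fusion."""
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    total = sum(w)
    w = [wi / total for wi in w]
    return [sum(wi * c[i] for wi, c in zip(w, candidates))
            for i in range(len(candidates[0]))]

# A flat "frame" (e.g. one row of pixels) with uneven noise.
noisy = [1.2, 0.8, 1.1, 0.9, 3.0, 1.0]
clean = [1.0] * 6

# Three reconstructions at different denoising ratios.
cands = [denoise(noisy, s) for s in (0.2, 0.5, 0.8)]

# Score each candidate by negative MSE against the clean frame; in the
# real network the fusion weights are predicted, not computed from
# ground truth, which is unavailable at the decoder.
scores = [-sum((c - t) ** 2 for c, t in zip(cand, clean)) / len(clean)
          for cand in cands]
fused = adaptive_fusion(cands, scores)
```

Because the fused frame is a convex combination of the candidates and squared error is convex, the fused MSE can never exceed that of the worst candidate, which is one intuition for why fusing reconstructions with different denoising ratios helps on frames with uneven noise.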
ISSN: 1070-9908, 1558-2361
DOI: 10.1109/LSP.2019.2899253