Deep Reference Frame Generation Method for VVC Inter Prediction Enhancement

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 5, pp. 3111-3124, May 2024
Authors: Jia, Jianghao; Zhang, Yuantong; Zhu, Han; Chen, Zhenzhong; Liu, Zizheng; Xu, Xiaozhong; Liu, Shan
Format: Article
Language: English
Abstract: In video coding, inter prediction aims to reduce temporal redundancy by using previously encoded frames as references. The quality of reference frames is crucial to the performance of inter prediction. This paper presents a deep reference frame generation method to optimize inter prediction in Versatile Video Coding (VVC). Specifically, reconstructed frames are sent to a well-designed frame generation network to synthesize a picture similar to the frame currently being encoded. The synthesized picture serves as an additional reference frame inserted into the reference picture list (RPL) to provide a more reliable reference for subsequent motion estimation (ME) and motion compensation (MC). The frame generation network employs optical flow to predict motion precisely. Moreover, an optical flow reorganization strategy is proposed to enable bi-directional and uni-directional predictions with only a single network architecture. To properly integrate our method into VVC, we further introduce a normative modification of temporal motion vector prediction (TMVP). Integrated into the VVC reference software VTM-15.0, the deep reference frame generation method achieves coding efficiency improvements of 5.22%, 3.61%, and 3.83% for the Y component under random access (RA), low delay B (LDB), and low delay P (LDP) configurations, respectively. The proposed method has been discussed at Joint Video Experts Team (JVET) meetings and is currently part of the Exploration Experiments (EE) for further study.
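
As a conceptual illustration only, and not the authors' network, the following Python/PyTorch sketch shows the two mechanics the abstract describes: backward-warping reconstructed reference frames toward the current frame using optical flow, and inserting the synthesized picture into a reference picture list. Everything here is a hypothetical simplification: the flows are assumed to be given (the paper's generation network estimates them itself), the two warped frames are blended with a plain average rather than learned masks, and `backward_warp`, `synthesize_reference`, and the Python-list RPL are illustrative names, not VTM APIs.

```python
# Hypothetical, simplified sketch: synthesize an extra reference frame by
# optical-flow warping and append it to a reference picture list (RPL).
# This is NOT the paper's network; flows are given and blending is a mean.
import torch
import torch.nn.functional as F


def backward_warp(ref: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a reference picture toward the current frame.

    ref:  (N, C, H, W) reconstructed reference picture
    flow: (N, 2, H, W) optical flow from current frame to `ref`, in pixels
    """
    n, _, h, w = ref.shape
    # Base sampling grid holding each output pixel's own (x, y) coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=ref.dtype, device=ref.device),
        torch.arange(w, dtype=ref.dtype, device=ref.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(n, -1, -1, -1)
    # Displace by the flow, then normalize to [-1, 1] for grid_sample.
    coords = grid + flow
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(ref, sample_grid, mode="bilinear",
                         padding_mode="border", align_corners=True)


def synthesize_reference(ref0, flow0, ref1=None, flow1=None):
    """Bi-directional when two references are given, uni-directional otherwise
    (a stand-in for the paper's single network with flow reorganization)."""
    warped0 = backward_warp(ref0, flow0)
    if ref1 is None:
        return warped0
    warped1 = backward_warp(ref1, flow1)
    return 0.5 * (warped0 + warped1)  # learned masks in the real method


# Toy usage: append the synthesized picture to a reference picture list.
ref0 = torch.rand(1, 3, 64, 64)
ref1 = torch.rand(1, 3, 64, 64)
zero_flow = torch.zeros(1, 2, 64, 64)
rpl = [ref0, ref1]
rpl.append(synthesize_reference(ref0, zero_flow, ref1, zero_flow))
print(len(rpl))  # 3: two reconstructed references plus the synthetic one
```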
ISSN: 1051-8215 (print); 1558-2205 (electronic)
DOI: 10.1109/TCSVT.2023.3299410