Progressive Motion Boosting for Video Frame Interpolation

Bibliographic Details
Published in: IEEE Transactions on Multimedia, 2023-01, Vol. 25, pp. 1-14
Authors: Xiao, Jing; Xu, Kangmin; Hu, Mengshun; Liao, Liang; Wang, Zheng; Lin, Chia-Wen; Wang, Mi; Satoh, Shin'ichi
Format: Article
Language: English
Description
Abstract: Video frame interpolation has made great progress in estimating advanced optical flow and synthesizing in-between frames sequentially. However, frame interpolation involving various resolutions and motions remains challenging due to limited or fixed pre-trained networks. Inspired by the success of the coarse-to-fine scheme for video frame interpolation, i.e., gradually interpolating frames of different resolutions, we propose a progressive boosting network (ProBoost-Net) based on a multi-scale framework to achieve flexible recurrent scales and then gradually optimize optical flow estimation and frame interpolation. Specifically, we design a dense motion boosting (DMB) module to transfer features close to the real motion to the decoded features from later scales, providing complementary information that further refines the motion. Furthermore, to ensure the accuracy of the estimated motion features at each scale, we propose a motion adaptive fusion (MAF) module that adaptively handles motions with different receptive fields according to the motion conditions. Thanks to the framework's flexible recurrent scales, the number of scales can be customized to trade off computation against quality depending on the application scenario. Extensive experiments on various datasets demonstrate the superiority of the proposed method over state-of-the-art approaches across diverse scenarios.
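The abstract describes a coarse-to-fine recurrent scheme with a customizable number of scales. The paper's code is not reproduced here; the following is a minimal PyTorch sketch of that general idea under stated assumptions: the class name, the single shared `refine` convolution standing in for the DMB and MAF modules, and the plain average blend at the end are all illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch of a coarse-to-fine recurrent interpolation loop, in the
# spirit of the abstract. All internals are illustrative assumptions, not
# the authors' ProBoost-Net code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoarseToFineSketch(nn.Module):
    """Refines a flow field over a flexible number of recurrent scales."""

    def __init__(self, num_scales: int = 3):
        super().__init__()
        self.num_scales = num_scales  # customizable computation/quality trade-off
        # Placeholder for the per-scale refinement (DMB + MAF in the paper):
        # takes both frames plus the current flow, predicts a flow residual.
        self.refine = nn.Conv2d(3 + 3 + 2, 2, kernel_size=3, padding=1)

    def forward(self, frame0: torch.Tensor, frame1: torch.Tensor):
        b, _, h, w = frame0.shape
        coarsest = 2 ** (self.num_scales - 1)
        flow = torch.zeros(b, 2, h // coarsest, w // coarsest,
                           device=frame0.device)
        for s in reversed(range(self.num_scales)):  # coarse -> fine
            sh, sw = h // 2 ** s, w // 2 ** s
            f0 = F.interpolate(frame0, size=(sh, sw), mode="bilinear",
                               align_corners=False)
            f1 = F.interpolate(frame1, size=(sh, sw), mode="bilinear",
                               align_corners=False)
            # Upsample the previous scale's flow and rescale its magnitudes
            # (a no-op at the coarsest scale, where the flow is still zero).
            flow = F.interpolate(flow, size=(sh, sw), mode="bilinear",
                                 align_corners=False) * 2.0
            flow = flow + self.refine(torch.cat([f0, f1, flow], dim=1))
        # A real model would warp frame0/frame1 by the estimated flow and
        # fuse them; a plain average keeps the sketch short.
        return (frame0 + frame1) / 2, flow


net = CoarseToFineSketch(num_scales=4)  # more scales, more refinement passes
mid_frame, flow = net(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```

Because the refinement weights are shared across scales, the same network can in principle be unrolled for more or fewer passes at inference time, which matches the computation/quality trade-off the abstract attributes to the flexible recurrent scales.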
ISSN: 1520-9210
eISSN: 1941-0077
DOI: 10.1109/TMM.2022.3233310