Video Demoiréing With Deep Temporal Color Embedding and Video-Image Invertible Consistency

Bibliographic Details
Published in: IEEE Transactions on Multimedia, 2024, Vol. 26, pp. 7386-7397
Authors: Liu, Lin; An, Junfeng; Yuan, Shanxin; Zhou, Wengang; Li, Houqiang; Wang, Yanfeng; Tian, Qi
Format: Article
Language: English
Description
Abstract: Demoiréing is the task of removing moiré patterns, which are commonly caused by interference between a screen and a digital camera. Although research on single-image demoiréing has made great progress, video demoiréing has received less attention from the community, and it poses a new set of challenges. First, most existing video restoration algorithms rely on multi-resolution pixel-based alignment, which can damage the details of the predicted results. Second, these algorithms are based on flow-based or relation-based losses, making it difficult to handle large motions between adjacent frames while keeping temporal consistency intact. To address these challenges, we present a novel deep learning-based approach, the Deep Temporal Color Embedding network (DTCENet), which employs an invertible network to align distorted color patches in a patch-based embedding framework. DTCENet preserves details well while eliminating color distortions. Furthermore, we introduce a video-image invertible loss function to effectively handle the color inconsistency between adjacent frames. Our approach shows promising results on video demoiréing, with improved performance over existing state-of-the-art algorithms: it achieves about a 10% improvement in LPIPS and a 10.3% improvement in FID compared with recent SOTA methods.
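The abstract's key property — an invertible network that can transform patches and exactly recover the input — is usually built from coupling layers. The sketch below is a generic additive coupling layer (RealNVP-style) in NumPy, shown only to illustrate why such a transform is exactly invertible; it is not the authors' DTCENet architecture, whose details the abstract does not specify.

```python
import numpy as np

def coupling_forward(x1, x2, w, b):
    """Split the input into (x1, x2); shift x2 by a function of x1.

    Because x1 passes through unchanged, the shift can always be
    recomputed at inversion time, so no information is lost.
    """
    y1 = x1                          # identity branch
    y2 = x2 + np.tanh(x1 @ w + b)    # additive shift from a tiny "network"
    return y1, y2

def coupling_inverse(y1, y2, w, b):
    """Exactly recover (x1, x2) by subtracting the same shift."""
    x1 = y1
    x2 = y2 - np.tanh(y1 @ w + b)
    return x1, x2

# Round-trip check: forward followed by inverse reproduces the input.
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
w, b = rng.normal(size=(8, 8)), rng.normal(size=8)

y1, y2 = coupling_forward(x1, x2, w, b)
r1, r2 = coupling_inverse(y1, y2, w, b)
print(np.allclose(x1, r1) and np.allclose(x2, r2))
```

Stacking several such layers (swapping which half is transformed each time) yields an expressive yet fully invertible mapping, which is what lets an invertible alignment module avoid the detail loss the abstract attributes to pixel-based alignment.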
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2024.3366765