Implicit Dual-Domain Convolutional Network for Robust Color Image Compression Artifact Reduction


Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2020-11, Vol. 30 (11), p. 3982-3994
Authors: Zheng, Bolun; Chen, Yaowu; Tian, Xiang; Zhou, Fan; Liu, Xuesong
Format: Article
Language: English
Description
Abstract: Several dual-domain convolutional neural network-based methods show outstanding performance in reducing image compression artifacts. However, they cannot handle color images, as the compression processes for grayscale and color images differ. Moreover, these methods train a specific model for each compression quality and thus require multiple models to cover different compression qualities. To address these problems, we propose an implicit dual-domain convolutional network (IDCN) that takes a pixel position labeling map and quantization tables as inputs. An extractor-corrector framework-based dual-domain correction unit (DCU) serves as the basic component of the IDCN; the implicit dual-domain translation allows the IDCN to handle color images with discrete cosine transform (DCT)-domain priors. A flexible version of the IDCN (IDCN-f) is also developed to handle a wide range of compression qualities. Objective and subjective evaluations on benchmark datasets show that the IDCN is superior to state-of-the-art methods, and that IDCN-f handles a wide range of compression qualities with only a small trade-off in performance, demonstrating great potential for practical applications.
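The abstract mentions a pixel position labeling map as a network input. The paper itself is not reproduced here, so the following is only a minimal sketch of one plausible construction, assuming the map encodes each pixel's offset within the 8x8 JPEG coding block (the grid that the DCT and quantization operate on); the actual encoding used by the IDCN may differ.

```python
import numpy as np

def pixel_position_label_map(height: int, width: int, block: int = 8) -> np.ndarray:
    """Hypothetical sketch: label each pixel by its (row, col) offset inside
    the block x block JPEG coding grid, normalized to [0, 1]."""
    rows = np.arange(height) % block  # row offset within the block
    cols = np.arange(width) % block   # column offset within the block
    rr, cc = np.meshgrid(rows, cols, indexing="ij")
    # Two-channel map: channel 0 = row offset, channel 1 = column offset
    return np.stack([rr, cc], axis=-1).astype(np.float32) / (block - 1)

# Example: a 16x16 image covers a 2x2 grid of 8x8 blocks, so the
# label pattern repeats every 8 pixels in both directions.
label_map = pixel_position_label_map(16, 16)
```

Such a map, concatenated with the image (and the quantization tables broadcast per block), would let a convolutional network condition on where a pixel sits relative to block boundaries, which is where blocking artifacts concentrate.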
ISSN: 1051-8215, 1558-2205
DOI:10.1109/TCSVT.2019.2931045