Wavelet-Based Texture Reformation Network for Image Super-Resolution

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2022, Vol. 31, pp. 2647-2660
Main Authors: Li, Zhen; Kuang, Zeng-Sheng; Zhu, Zuo-Liang; Wang, Hong-Peng; Shao, Xiu-Li
Format: Article
Language: English
Description
Abstract: Most reference-based image super-resolution (RefSR) methods directly leverage the raw features extracted from a pretrained VGG encoder to transfer the matched texture information from a reference image to a low-resolution image. We argue that simply operating on these raw features neglects the influence of irrelevant and redundant information and the importance of abundant high-frequency representations, leading to undesirable texture matching and transfer results. Taking advantage of the wavelet transform, which represents the contextual and textural information of features at different scales, we propose a Wavelet-based Texture Reformation Network (WTRN) for RefSR. We first decompose the extracted texture features into low-frequency and high-frequency sub-bands and conduct feature matching on the low-frequency component. Based on the correlation map obtained from the feature matching process, we then separately swap and transfer wavelet-domain features at different stages of the network. Furthermore, a wavelet-based texture adversarial loss is proposed to make the network generate more visually plausible textures. Experiments on four benchmark datasets demonstrate that our proposed method outperforms previous RefSR methods both quantitatively and qualitatively. The source code is available at https://github.com/zskuang58/WTRN-TIP.
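
The abstract describes two core steps: decomposing encoder features into wavelet sub-bands and running correlation-based matching on the low-frequency component only. The snippet below is a minimal PyTorch sketch of that idea, not the authors' released implementation (see the linked repository for that); the helper names haar_dwt and match_low_frequency, the single-level Haar transform, and the 3x3 patch size are illustrative assumptions.

import torch
import torch.nn.functional as F

def haar_dwt(feat):
    """Single-level Haar decomposition of a feature map (B, C, H, W) into
    one low-frequency band (LL) and three high-frequency bands (LH, HL, HH),
    each of spatial size (H/2, W/2)."""
    a = feat[:, :, 0::2, 0::2]  # top-left of each 2x2 block
    b = feat[:, :, 0::2, 1::2]  # top-right
    c = feat[:, :, 1::2, 0::2]  # bottom-left
    d = feat[:, :, 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2
    lh = (a - b + c - d) / 2
    hl = (a + b - c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, (lh, hl, hh)

def match_low_frequency(lr_ll, ref_ll, patch=3):
    """Correlation map between the LL sub-bands of the (upsampled) LR input
    and the reference: for each LR position, return the score and index of
    the best-matching reference patch under normalized cross-correlation."""
    lr_patches = F.unfold(lr_ll, kernel_size=patch, padding=patch // 2)    # (B, C*p*p, N_lr)
    ref_patches = F.unfold(ref_ll, kernel_size=patch, padding=patch // 2)  # (B, C*p*p, N_ref)
    lr_patches = F.normalize(lr_patches, dim=1)
    ref_patches = F.normalize(ref_patches, dim=1)
    corr = torch.bmm(ref_patches.transpose(1, 2), lr_patches)              # (B, N_ref, N_lr)
    scores, indices = corr.max(dim=1)
    return scores, indices

In this reading, the indices returned by match_low_frequency would then drive the swapping and multi-stage transfer of the corresponding high-frequency sub-bands that the abstract describes.
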
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2022.3160072