DSMA: Reference-Based Image Super-Resolution Method Based on Dual-View Supervised Learning and Multi-Attention Mechanism

Full Description

Bibliographic Details
Published in: IEEE Access 2022, Vol.10, p.54649-54659
Main Authors: Liu, Xin; Li, Jing; Duan, Tingting; Li, Jiangtao; Wang, Ye
Format: Article
Language: eng
Subjects:
Online Access: Full Text
Description
Abstract: Reference image based super-resolution (RefSR) methods have made rapid and remarkable progress in the field of image super-resolution (SR) in recent years by introducing additional high-resolution (HR) images to enhance the recovery of low-resolution (LR) images. Existing RefSR methods rely on implicit correspondence matching to transfer HR texture from the reference image (Ref) to compensate for the information loss in the input image. However, the differences between low-resolution input images and high-resolution reference images still affect the effective utilization of Ref images, so making full use of the information in Ref images to improve SR performance remains an important challenge. In this paper, we propose an image super-resolution method based on dual-view supervised learning and a multi-attention mechanism (DSMA). It enhances the learning of important detail features of Ref images and weakens the interference of noisy information by introducing the multi-attention mechanism, while employing dual-view supervision to encourage the network to learn more accurate feature representations. Quantitative and qualitative experiments on the CUFED5, Urban100, and Manga109 benchmarks show that DSMA outperforms state-of-the-art baselines by significant margins.
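
The abstract names two components, a multi-attention mechanism over the reference features and dual-view supervision, without giving implementation details. The following is only a minimal, hypothetical PyTorch sketch of the attention-based fusion idea (channel plus spatial attention applied to aligned Ref features, fused residually into LR features); the class names, shapes, and reduction ratio are assumptions for illustration, not the authors' DSMA architecture, and the dual-view supervision loss is omitted.

# Minimal sketch (not the authors' DSMA implementation): fuse reference-image
# features into LR features with channel and spatial attention, assuming the
# Ref features have already been extracted and aligned by a matching step.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.mlp(x)


class SpatialAttention(nn.Module):
    """Spatial gate computed from pooled channel statistics."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate


class RefFusionBlock(nn.Module):
    """Attend over aligned Ref features, then fuse them with LR features."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, lr_feat, ref_feat):
        ref_feat = self.sa(self.ca(ref_feat))   # emphasize useful Ref detail, suppress noise
        fused = self.fuse(torch.cat([lr_feat, ref_feat], dim=1))
        return lr_feat + fused                  # residual fusion into the LR branch


if __name__ == "__main__":
    lr_feat = torch.randn(1, 64, 40, 40)   # hypothetical LR feature map
    ref_feat = torch.randn(1, 64, 40, 40)  # hypothetical aligned Ref feature map
    out = RefFusionBlock(64)(lr_feat, ref_feat)
    print(out.shape)  # torch.Size([1, 64, 40, 40])

In a full RefSR pipeline, a block like this would sit after correspondence matching and before the reconstruction layers; the dual-view supervision described in the abstract would then add extra loss terms on intermediate feature representations.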
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3174194