Joint channel-spatial attention network for super-resolution image quality assessment
Published in: Applied Intelligence (Dordrecht, Netherlands), 2022-12, Vol. 52 (15), p. 17118-17132
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Image super-resolution (SR) is an effective technique for enhancing the quality of low-resolution (LR) images. However, one of the most fundamental problems in SR is evaluating the quality of the resulting images in order to compare and optimize the performance of SR algorithms. In this paper, we propose a novel deep network model, referred to as the joint channel-spatial attention network (JCSAN), for no-reference SR image quality assessment (NR-SRIQA). The JCSAN consists of a two-stream branch that learns middle-level and primary-level features to jointly quantify the degradation of SR images. The first subnetwork, for middle-level feature learning, embeds a two-stage convolutional block attention module (CBAM) to capture discriminative perceptual feature maps through channel and spatial attention in sequence, while the other, shallow convolutional subnetwork learns dense, primary-level textural feature maps. To yield more accurate quality estimates for SR images, we integrate a unit aggregation gate (AG) module that dynamically distributes channel weights to the two feature maps from the different branches. Extensive experimental results on two benchmark datasets verify the superiority of the proposed JCSAN-based quality metric in comparison with other state-of-the-art competitors.
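The abstract describes the architecture only at a high level. As a rough illustration, the sketch below shows a CBAM-style block (channel attention followed by spatial attention) and a channel-wise aggregation gate that mixes two branch feature maps. It is a minimal PyTorch sketch based on the abstract's description and the published CBAM design, not the authors' implementation; the layer sizes, reduction ratio, and the AggregationGate details are assumptions.

```python
# Minimal sketch, assuming PyTorch; shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))            # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))             # global max pooling
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w                                  # reweight channels

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)             # per-pixel channel mean
        mx = x.amax(dim=1, keepdim=True)              # per-pixel channel max
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w                                  # reweight spatial positions

class CBAMBlock(nn.Module):
    """Channel attention then spatial attention, applied in sequence."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

class AggregationGate(nn.Module):
    """Channel-wise gate that fuses two branch feature maps
    (an assumed reading of the AG module, not the paper's exact design)."""
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Linear(channels, channels)

    def forward(self, a, b):
        g = torch.sigmoid(self.fc((a + b).mean(dim=(2, 3))))  # per-channel gate
        g = g.view(a.size(0), -1, 1, 1)
        return g * a + (1 - g) * b                            # weighted fusion

# Example: fuse hypothetical attention-branch and texture-branch feature maps.
# fused = AggregationGate(64)(cbam_features, texture_features)
```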
ISSN: 0924-669X; 1573-7497
DOI: 10.1007/s10489-022-03338-1