Joint channel-spatial attention network for super-resolution image quality assessment

Bibliographic Details
Published in: Applied Intelligence (Dordrecht, Netherlands), 2022-12, Vol. 52 (15), p. 17118-17132
Main authors: Zhang, Tingyue; Zhang, Kaibing; Xiao, Chuan; Xiong, Zenggang; Lu, Jian
Format: Article
Language: English
Online access: Full text
Abstract: Image super-resolution (SR) is an effective technique for enhancing the quality of low-resolution (LR) images. However, one of the most fundamental problems in SR is evaluating the quality of the resulting images so that the performance of SR algorithms can be compared and optimized. In this paper, we propose a novel deep network model, referred to as a joint channel-spatial attention network (JCSAN), for no-reference SR image quality assessment (NR-SRIQA). The JCSAN consists of a two-stream architecture that jointly quantifies the degradation of SR images by learning middle-level and primary-level features. In the middle-level feature learning subnetwork, we embed a two-stage convolutional block attention module (CBAM) that captures discriminative perceptual feature maps through channel and spatial attention applied in sequence. The other, shallow convolutional subnetwork learns dense, primary-level textural feature maps. To yield more accurate quality estimates for SR images, we integrate a unit aggregation gate (AG) module that dynamically distributes channel weights to the two feature maps from the different branches. Extensive experimental results on two benchmark datasets verify the superiority of the proposed JCSAN-based quality metric in comparison with other state-of-the-art competitors.
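The abstract names two attention mechanisms: a CBAM block that applies channel attention followed by spatial attention in sequence, and an aggregation gate (AG) that fuses the two branch feature maps with learned per-channel weights. The sketch below is a minimal, hypothetical PyTorch rendering of those two ideas, not the authors' code; the class names, reduction ratio, and channel count are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's implementation): CBAM-style
# channel-then-spatial attention, plus a simple aggregation gate that
# softmax-weights two branch feature maps per channel.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Gate channels using global avg/max statistics passed through a shared MLP."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return x * torch.sigmoid(avg + mx)


class SpatialAttention(nn.Module):
    """Pool across channels, then a 7x7 conv produces a spatial gate map."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAMBlock(nn.Module):
    """Channel attention followed by spatial attention, in sequence."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))


class AggregationGate(nn.Module):
    """Fuse two branch feature maps with learned per-channel soft weights."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, kernel_size=1),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Global statistics of the summed branches drive the gate;
        # softmax over the branch axis makes the channel weights sum to 1.
        s = torch.mean(a + b, dim=(2, 3), keepdim=True)
        w = self.fc(s).view(s.size(0), 2, -1, 1, 1).softmax(dim=1)
        return w[:, 0] * a + w[:, 1] * b


if __name__ == "__main__":
    x_mid = torch.randn(1, 64, 32, 32)  # middle-level branch features
    x_tex = torch.randn(1, 64, 32, 32)  # primary-level textural features
    fused = AggregationGate(64)(CBAMBlock(64)(x_mid), x_tex)
    print(fused.shape)  # torch.Size([1, 64, 32, 32])
```

Under these assumptions, the gate reduces to channel-wise convex combination of the two branches, which matches the abstract's description of dynamically distributing channel weights across streams.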
DOI: 10.1007/s10489-022-03338-1
ISSN: 0924-669X
EISSN: 1573-7497
Publisher: New York: Springer US
Source: SpringerLink Journals - AutoHoldings
Subjects:
Algorithms
Artificial Intelligence
Computer Science
Deep learning
Feature maps
Image enhancement
Image quality
Image resolution
Machine learning
Machines
Manufacturing
Mechanical Engineering
Methods
Modules
Neural networks
Processes
Quality assessment