Single Image Super Resolution via Multi-attention Fusion Recurrent Network
Deep convolutional neural networks have significantly enhanced the performance of single image super-resolution in recent years. However, the majority of the proposed networks are single-channel, making it challenging to fully exploit the advantages of neural networks in feature extraction. This paper proposes a Multi-attention Fusion Recurrent Network (MFRN), which is a multiplexing architecture-based network. Firstly, the algorithm reuses the feature extraction part to construct the recurrent network. This technology reduces the number of network parameters, accelerates training, and captures rich features simultaneously. Secondly, a multiplexing-based structure is employed to obtain deep information features, which alleviates the issue of feature loss during transmission. Thirdly, an attention fusion mechanism is incorporated into the neural network to fuse channel attention and pixel attention information. This fusion mechanism effectively enhances the expressive power of each layer of the neural network. Compared with other algorithms, our MFRN not only exhibits superior visual performance but also achieves favorable results in objective evaluations. It generates images with sharper structure and texture details and achieves higher scores in quantitative tests such as image quality assessment.
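The abstract describes three building blocks: a recurrent trunk that reuses one feature-extraction module, a multiplexing-style structure for deep features, and a fusion of channel attention with pixel attention. The sketch below is only a minimal illustration of those ideas in PyTorch, not the authors' MFRN implementation; the module names (`AttentionFusion`, `RecurrentSRNet`), channel counts, number of recurrent steps, and the additive fusion rule are all assumptions.

```python
# Hypothetical sketch of the ideas in the abstract, not the authors' released
# code: a channel-attention + pixel-attention fusion block, and a recurrent
# trunk that reuses (shares) one feature-extraction module across unrolled
# steps. Module names, channel counts, and the additive fusion are assumptions.
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Fuse channel attention and pixel attention over the same feature map."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite per-channel weights.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Pixel (spatial) attention: one weight per spatial location.
        self.pixel_att = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Apply both attentions to the same input and fuse by summation.
        return x * self.channel_att(x) + x * self.pixel_att(x)


class RecurrentSRNet(nn.Module):
    """Recurrent super-resolution trunk that reuses one extraction block."""

    def __init__(self, channels: int = 64, steps: int = 4, scale: int = 2):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        # One shared block: reusing it across `steps` iterations keeps the
        # parameter count constant while deepening the effective network.
        self.shared_block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            AttentionFusion(channels),
        )
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        self.steps = steps

    def forward(self, lr: torch.Tensor) -> torch.Tensor:
        feat = self.head(lr)
        for _ in range(self.steps):
            # Residual connection keeps shallow features flowing to deep steps.
            feat = feat + self.shared_block(feat)
        return self.upsample(feat)


if __name__ == "__main__":
    net = RecurrentSRNet()
    sr = net(torch.randn(1, 3, 48, 48))  # -> torch.Size([1, 3, 96, 96])
    print(sr.shape)
```

Sharing `shared_block` across the unrolled steps is what keeps the parameter count fixed while the effective depth grows, which is the trade-off the abstract attributes to the recurrent design.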
Saved in:
Published in: | IEEE Access, 2023-01, Vol. 11, p. 1-1 |
---|---|
Main Authors: | Kou, Qiqi; Cheng, Deqiang; Zhang, Haoxiang; Liu, Jingjing; Guo, Xin; Jiang, He |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Full text |
container_end_page | 1 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | IEEE access |
container_volume | 11 |
creator | Kou, Qiqi Cheng, Deqiang Zhang, Haoxiang Liu, Jingjing Guo, Xin Jiang, He |
description | Deep convolutional neural networks have significantly enhanced the performance of single image super-resolution in recent years. However, the majority of the proposed networks are single-channel, making it challenging to fully exploit the advantages of neural networks in feature extraction. This paper proposes a Multi-attention Fusion Recurrent Network (MFRN), which is a multiplexing architecture-based network. Firstly, the algorithm reuses the feature extraction part to construct the recurrent network. This technology reduces the number of network parameters, accelerates training, and captures rich features simultaneously. Secondly, a multiplexing-based structure is employed to obtain deep information features, which alleviates the issue of feature loss during transmission. Thirdly, an attention fusion mechanism is incorporated into the neural network to fuse channel attention and pixel attention information. This fusion mechanism effectively enhances the expressive power of each layer of the neural network. Compared with other algorithms, our MFRN not only exhibits superior visual performance but also achieves favorable results in objective evaluations. It generates images with sharper structure and texture details and achieves higher scores in quantitative tests such as image quality assessment. |
doi_str_mv | 10.1109/ACCESS.2023.3314196 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 2169-3536 |
ispartof | IEEE access, 2023-01, Vol.11, p.1-1 |
issn | 2169-3536 2169-3536 |
language | eng |
recordid | cdi_ieee_primary_10247056 |
source | IEEE Open Access Journals; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals |
subjects | Algorithms; Artificial neural networks; Attention fusion mechanism; Computer architecture; Convolutional neural networks; Feature extraction; Image enhancement; Image quality; Image reconstruction; Image resolution; Multiplexing; Multiplexing-based; Neural networks; Quality assessment; Recurrent network; Recurrent neural networks; Super resolution; Superresolution; Training |
title | Single Image Super Resolution via Multi-attention Fusion Recurrent Network |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-15T11%3A48%3A02IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_ieee_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Single%20Image%20Super%20Resolution%20via%20Multi-attention%20Fusion%20Recurrent%20Network&rft.jtitle=IEEE%20access&rft.au=Kou,%20Qiqi&rft.date=2023-01-01&rft.volume=11&rft.spage=1&rft.epage=1&rft.pages=1-1&rft.issn=2169-3536&rft.eissn=2169-3536&rft.coden=IAECCG&rft_id=info:doi/10.1109/ACCESS.2023.3314196&rft_dat=%3Cproquest_ieee_%3E2865090167%3C/proquest_ieee_%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2865090167&rft_id=info:pmid/&rft_ieee_id=10247056&rft_doaj_id=oai_doaj_org_article_499859847b9040c8af7195ef3fd148eb&rfr_iscdi=true |