Image super-resolution reconstruction based on feature map attention mechanism

To address the problem that existing image super-resolution reconstruction methods treat the low-frequency and high-frequency components of feature maps equally, this paper proposes a super-resolution reconstruction method that applies an attention mechanism to feature maps, reconstructing multi-scale super-resolution images from the original low-resolution input. The model consists of a feature extraction block, information extraction blocks, and a reconstruction module. First, the feature extraction block extracts useful features from the low-resolution image, and the stacked information extraction blocks combine these features with the feature map attention mechanism, passing information between feature channels. Second, the interdependence between channels is used to adaptively recalibrate channel responses and recover more detail. Finally, the reconstruction module produces high-resolution images at different scales. Experiments on the Set5, Set14, Urban100, and Manga109 benchmarks show that the method improves both the visual quality of the reconstructed images and the quantitative results, with gains in Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) over comparable reconstruction methods, demonstrating the effectiveness of the feature map attention mechanism for image super-resolution reconstruction.
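The abstract does not reproduce the network details, but the channel-attention idea it describes (using inter-channel interdependence to adaptively reweight feature maps) and the PSNR figure it reports can be illustrated with a short sketch. The Python/PyTorch code below is a minimal, hypothetical illustration, not the authors' implementation; the class name, the reduction ratio, and the squeeze-and-excitation layout are assumptions made for the example.

# Hypothetical sketch (not the authors' code): a squeeze-and-excitation style
# channel-attention block of the kind the abstract describes, plus the standard
# PSNR definition. Names and the reduction ratio are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        # "Squeeze": collapse each feature map to one descriptor per channel.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # "Excitation": model channel interdependence with a small bottleneck,
        # producing a weight in [0, 1] for every channel.
        self.fc = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) feature maps from an extraction block.
        weights = self.fc(self.pool(x))   # shape (batch, channels, 1, 1)
        return x * weights                # adaptively rescale the channels

def psnr(sr: torch.Tensor, hr: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    # Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE).
    mse = torch.mean((sr - hr) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

For example, ChannelAttention(64) applied to a (1, 64, 48, 48) feature tensor returns a tensor of the same shape with informative channels emphasised; SSIM, the other metric quoted in the abstract, is more involved and is not reproduced here.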

Saved in:
Bibliographic details
Published in: Applied intelligence (Dordrecht, Netherlands), 2021-07, Vol.51 (7), p.4367-4380
Authors: Chen, Yuantao; Liu, Linwu; Phonevilay, Volachith; Gu, Ke; Xia, Runlong; Xie, Jingbo; Zhang, Qian; Yang, Kai
Format: Article
Language: English
Subjects: Algorithms; Artificial Intelligence; Computer Science; Deep learning; Feature extraction; Feature maps; Image reconstruction; Image resolution; Image restoration; Information retrieval; Machines; Manufacturing; Mapping; Mechanical Engineering; Modules; Processes; Signal to noise ratio; Similarity; Visual effects
Online access: Full text
DOI: 10.1007/s10489-020-02116-1
ISSN: 0924-669X
EISSN: 1573-7497
Publisher: Springer US (New York)