SCGA‐Net: Skip Connections Global Attention Network for Image Restoration
Deep convolutional neural networks (DCNNs) have shown their advantages in image restoration tasks, but most existing DCNN-based methods still suffer from residual corruptions and coarse textures. In this paper, we propose a general framework, the "Skip Connections Global Attention Network", that focuses on semantics delivery from shallow layers to deep layers for low-level vision tasks, including image dehazing, image denoising, and low-light image enhancement. First, by applying dense dilated convolution and a multi-scale feature fusion mechanism, we establish a novel encoder-decoder network framework to aggregate large-scale spatial context and enhance feature reuse. Second, our solution for skip connections uses an attention mechanism to constrain information, thereby enhancing the high-frequency details of feature maps and suppressing the output of corruptions. Finally, we present a novel attention module, dubbed global constraint attention, which effectively captures the relationships between pixels across the entire feature map to obtain the subtle differences among pixels and produce an overall optimal 3D attention map. Extensive experiments demonstrate that the proposed method achieves significant improvements over state-of-the-art methods in image dehazing, image denoising, and low-light image enhancement.
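The abstract names three building blocks: a densely connected dilated-convolution encoder-decoder, attention-gated skip connections, and a "global constraint attention" module that scores every position of a feature map to produce a full C × H × W attention map. Below is a minimal PyTorch sketch of how such blocks could fit together; the module names, layer sizes, and the pixel-versus-global-descriptor attention formula are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the three mechanisms described in the abstract.
# All design details here are guesses for illustration only.
import torch
import torch.nn as nn


class GlobalAttention(nn.Module):
    """Produce a 3D (C x H x W) attention map over an entire feature map.

    Pools the map into a global descriptor, then rescores every position
    against that descriptor, so the output weights reflect subtle
    per-pixel differences relative to the whole image (an assumption;
    the paper's exact formulation may differ).
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.embed = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.expand = nn.Conv2d(channels // reduction, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global descriptor: mean over all spatial positions.
        g = x.mean(dim=(2, 3), keepdim=True)                 # (B, C, 1, 1)
        # Compare every position against the global descriptor.
        score = self.expand(torch.relu(self.embed(x - g)))   # (B, C, H, W)
        attn = torch.sigmoid(score)                          # 3D attention map
        return x * attn


class AttentionSkip(nn.Module):
    """Skip connection that filters encoder features before fusion."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = GlobalAttention(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, enc: torch.Tensor, dec: torch.Tensor) -> torch.Tensor:
        # Suppress residual corruptions in the shallow (encoder) features,
        # then merge them with the deep (decoder) features.
        gated = self.attn(enc)
        return self.fuse(torch.cat([gated, dec], dim=1))


class DenseDilatedBlock(nn.Module):
    """Densely connected dilated convolutions to grow the receptive field."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels * (i + 1), channels, kernel_size=3,
                      padding=d, dilation=d)
            for i, d in enumerate(dilations)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for conv in self.convs:
            # Each conv sees the concatenation of all earlier outputs
            # (dense connectivity) at an increasing dilation rate.
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return feats[-1]


if __name__ == "__main__":
    skip = AttentionSkip(channels=64)
    enc = torch.randn(1, 64, 32, 32)   # shallow encoder features
    dec = torch.randn(1, 64, 32, 32)   # upsampled decoder features
    print(skip(enc, dec).shape)        # torch.Size([1, 64, 32, 32])
```

In a full encoder-decoder of this kind, one AttentionSkip would sit on each skip connection, with DenseDilatedBlock stages aggregating large-scale spatial context between resolution changes.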
Saved in:
Published in: | Computer graphics forum 2020-10, Vol.39 (7), p.507-518 |
---|---|
Main authors: | Ren, Dongdong; Li, Jinbao; Han, Meng; Shu, Minglei |
Format: | Article |
Language: | English |
Subjects: | Artificial neural networks; CCS Concepts; Coders; Computing methodologies → Image processing; Convolution; Feature maps; Image enhancement; Image restoration; Noise reduction; Pixels; Semantics |
Online access: | Full text |
DOI: | 10.1111/cgf.14163 |
Publisher: | Blackwell Publishing Ltd, Oxford |
Rights: | © 2020 The Eurographics Association and John Wiley & Sons Ltd. |
ISSN: | 0167-7055 |
EISSN: | 1467-8659 |