Multi-scale Generative Adversarial Deblurring Network with Gradient Guidance
To address the lack of crisp edges and the poor recovery of high-frequency information, such as fine details, in motion-deblurred images, this research proposes a multi-scale adversarial deblurring network with gradient guidance (MADN). The algorithm uses the classical generative adversarial network (...
Saved in:
Published in: | Wangji Wanglu Jishu Xuekan = Journal of Internet Technology 2023-03, Vol.24 (2), p.243-255 |
---|---|
Main authors: | Jinxiu Zhu, Xue Xu, Chang Choi, Xin Su |
Format: | Article |
Language: | eng |
Subjects: | Algorithms; Codec; Feature extraction; Generative adversarial networks; High frequencies; Recovery; Signal to noise ratio |
Online access: | Full text |
container_end_page | 255 |
---|---|
container_issue | 2 |
container_start_page | 243 |
container_title | Wangji Wanglu Jishu Xuekan = Journal of Internet Technology |
container_volume | 24 |
creator | Jinxiu Zhu, Xue Xu, Chang Choi, Xin Su |
description | To address the lack of crisp edges and the poor recovery of high-frequency information, such as fine details, in motion-deblurred images, this research proposes a multi-scale adversarial deblurring network with gradient guidance (MADN). The algorithm uses the classical generative adversarial network (GAN) framework, consisting of a generator and a discriminator. The generator comprises a multi-scale convolutional network and a gradient feature extraction network. The multi-scale convolutional network extracts image features at different scales with a nested-connection residual encoder-decoder (codec) structure, improving the recovery of image edge structure and enlarging the receptive field. The gradient network incorporates intermediate-scale features to extract the gradient features of blurred images and thereby capture their high-frequency information. The generator combines the gradient and multi-scale features to recover the remaining high-frequency information in a deblurred image. The loss function of MADN combines adversarial loss, pixel L2-norm loss, and mean absolute error. Compared with the results of current deblurring algorithms, our experimental results show visually clearer images that retain more information such as edges and details. The MADN algorithm improves the peak signal-to-noise ratio by an average of 3.32 dB and the structural similarity by an average of 0.053. |
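The abstract describes a loss function that combines three terms: adversarial loss, pixel L2-norm loss, and mean absolute error. A minimal NumPy sketch of such a composite objective follows; the weight values and the non-saturating form of the adversarial term are assumptions for illustration, since the record does not specify them:

```python
import numpy as np

def composite_deblur_loss(restored, sharp, disc_fake_score,
                          w_adv=0.01, w_l2=1.0, w_mae=1.0):
    """Composite loss as described in the abstract: adversarial loss +
    pixel L2-norm loss + mean absolute error. Weights are hypothetical;
    the record does not give the paper's actual values."""
    # Adversarial term (assumed non-saturating form): the generator wants
    # the discriminator's score on the restored image to approach 1.
    adv = -np.mean(np.log(disc_fake_score + 1e-8))
    # Pixel L2-norm loss between the restored and ground-truth sharp image.
    l2 = np.mean((restored - sharp) ** 2)
    # Mean absolute error term.
    mae = np.mean(np.abs(restored - sharp))
    return w_adv * adv + w_l2 * l2 + w_mae * mae
```

A perfect restoration with a fully convinced discriminator drives all three terms toward zero, while the adversarial weight keeps the GAN term from dominating the pixel-level terms.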
doi_str_mv | 10.53106/160792642023032402003 |
format | Article |
fullrecord | [Raw ProQuest/CrossRef XML record omitted; it duplicates the title, authors, abstract, and subjects listed above. Additional facts it records: ISSN 1607-9264; EISSN 1607-9264, 2079-4029; publisher: National Dong Hwa University, Computer Center, Hualien; copyright 2023; peer reviewed; free to read.] |
fulltext | fulltext |
identifier | ISSN: 1607-9264 |
ispartof | Wangji Wanglu Jishu Xuekan = Journal of Internet Technology, 2023-03, Vol.24 (2), p.243-255 |
issn | 1607-9264; 2079-4029 |
language | eng |
recordid | cdi_proquest_journals_2791347629 |
source | Alma/SFX Local Collection |
subjects | Algorithms; Codec; Feature extraction; Generative adversarial networks; High frequencies; Recovery; Signal to noise ratio |
title | Multi-scale Generative Adversarial Deblurring Network with Gradient Guidance |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-04T02%3A39%3A52IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Multi-scale%20Generative%20Adversarial%20Deblurring%20Network%20with%20Gradient%20Guidance&rft.jtitle=Wangji%20Wanglu%20Jishu%20Xuekan%20=%20Journal%20of%20Internet%20Technology&rft.au=Jinxiu%20Zhu,%20Jinxiu%20Zhu&rft.date=2023-03-01&rft.volume=24&rft.issue=2&rft.spage=243&rft.epage=255&rft.pages=243-255&rft.issn=1607-9264&rft.eissn=1607-9264&rft_id=info:doi/10.53106/160792642023032402003&rft_dat=%3Cproquest_cross%3E2791347629%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2791347629&rft_id=info:pmid/&rfr_iscdi=true |