WTransU-Net: Wiener deconvolution meets multi-scale transformer-based U-net for image deblurring


Bibliographic Details
Published in: Signal, Image and Video Processing, 2023-11, Vol. 17 (8), p. 4265-4273
Authors: Zhao, Shixin; Xing, Yuanxiu; Xu, Hongyang
Format: Article
Language: English
Online access: Full text
Abstract: Deblurring is a classical image restoration problem. Although recent methods have shown promising deblurring performance, most still cannot effectively balance texture-detail restoration and model complexity: to improve deblurring performance, some models are simply designed to be more complex. In this work, a simple and efficient Wiener deconvolution and multi-scale transformer-based U-Net (WTransU-Net) is proposed to tackle these problems. First, the proposed Wiener feature extraction module uses explicit Wiener deconvolution to extract Wiener features in the deep feature space. Then, the obtained Wiener features are input into a multi-scale feature reconstruction module, which embeds only one transformer refining block in each scale of the U-Net to deblur the image from both local and global perspectives. In addition, a multi-scale hybrid loss function is designed to train the WTransU-Net in an end-to-end manner to better learn content and texture details. Experimental results on benchmark datasets show that, compared with state-of-the-art deblurring methods, the proposed WTransU-Net achieves better performance with fewer artifacts, both quantitatively and qualitatively.
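As background to the explicit Wiener deconvolution the abstract refers to, the following is a minimal frequency-domain sketch in NumPy. It illustrates only the classical Wiener filter, not the authors' deep-feature-space module; the function name and the simple scalar `snr` regularizer are assumptions for illustration.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr=0.01):
    """Classical frequency-domain Wiener deconvolution.

    blurred: 2-D image (float array), assumed circularly convolved
             with the blur kernel (point spread function).
    kernel:  blur kernel, smaller than or equal to the image.
    snr:     assumed noise-to-signal power ratio (regularizer).
    """
    # Pad the kernel to the image size and move both to the frequency domain.
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + snr), applied per frequency bin.
    W = np.conj(H) / (np.abs(H) ** 2 + snr)
    return np.real(np.fft.ifft2(W * G))
```

The `snr` term keeps the division stable where the blur kernel's transfer function `H` is close to zero; with `snr = 0` this reduces to naive inverse filtering, which amplifies noise at those frequencies.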
DOI: 10.1007/s11760-023-02659-z
ISSN: 1863-1703
EISSN: 1863-1711
Subjects: Complexity
Computer Imaging
Computer Science
Deconvolution
Feature extraction
Image Processing and Computer Vision
Image restoration
Modules
Multimedia Information Systems
Original Paper
Pattern Recognition and Graphics
Performance enhancement
Signal, Image and Speech Processing
Texture
Transformers
Vision