DeFusionNET: Defocus Blur Detection via Recurrently Fusing and Refining Discriminative Multi-Scale Deep Features

Albeit great success has been achieved in image defocus blur detection, there are still several unsolved challenges, e.g., interference of background clutter, scale sensitivity, and missing boundary details of blur regions. To deal with these issues, we propose a deep neural network that recurrently fuses and refines multi-scale deep features (DeFusionNet) for defocus blur detection. We first fuse the features from different layers of an FCN as shallow features and semantic features, respectively. Then, the fused shallow features are propagated to deep layers to refine the details of detected defocus blur regions, and the fused semantic features are propagated to shallow layers to help locate blur regions more accurately. The fusion and refinement are carried out recurrently. To narrow the gap between low-level and high-level features, we embed a feature adaptation module before feature propagation to exploit complementary information and reduce the contradictory responses of different feature layers. Since different feature channels discriminate blur regions to different extents, we design a channel attention module to select discriminative features for feature refinement. Finally, the outputs of each layer at the last recurrent step are fused to obtain the final result. We also collect a new dataset consisting of various challenging images with pixel-wise annotations to promote further study. Extensive experiments on two commonly used datasets and our newly collected one demonstrate both the efficacy and efficiency of DeFusionNet.
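The channel attention module is described only at a high level in the abstract. As an illustrative sketch, not the authors' implementation, a squeeze-and-excitation-style channel attention can be written in plain NumPy as follows; the bottleneck reduction ratio and weight shapes are assumptions:

```python
import numpy as np

def channel_attention(features, w1, w2):
    """SE-style channel attention: re-weight each channel of a
    (C, H, W) feature map by a learned scalar in (0, 1)."""
    # Squeeze: global average pooling over the spatial dims -> (C,)
    squeezed = features.mean(axis=(1, 2))
    # Excite: bottleneck MLP, ReLU then sigmoid gating
    hidden = np.maximum(0.0, w1 @ squeezed)           # (C // r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # (C,), each in (0, 1)
    # Re-scale: broadcast the channel weights over H x W
    return features * weights[:, None, None]

# Toy usage: 8 channels, hypothetical reduction ratio r = 4
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
w1 = rng.standard_normal((2, 8)) * 0.1   # squeeze weights (C//r, C)
w2 = rng.standard_normal((8, 2)) * 0.1   # excite weights (C, C//r)
y = channel_attention(x, w1, w2)
```

In the paper's pipeline such a gate would sit before feature refinement, so that channels with little discriminative response for blur regions are suppressed before fusion.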

Detailed description

Bibliographic details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022-02, Vol. 44 (2), p. 955-968
Main authors: Tang, Chang; Liu, Xinwang; Zheng, Xiao; Li, Wanqing; Xiong, Jian; Wang, Lizhe; Zomaya, Albert Y.; Longo, Antonella
Format: Article
Language: English
Subjects:
Online access: Order full text
DOI: 10.1109/TPAMI.2020.3014629
Publisher: IEEE, United States
PMID: 32759080
CODEN: ITPIDJ
Total pages: 14
ISSN: 0162-8828
EISSN: 1939-3539, 2160-9292
Source: IEEE Electronic Library (IEL)
Subjects:
Annotations
Artificial neural networks
channel attention
Clutter
Datasets
Defocus blur detection
Feature extraction
feature fusing
Fuses
Image edge detection
Machine learning
Modules
multi-scale features
Neural networks
Semantics
Task analysis
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-27T07%3A58%3A17IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=DeFusionNET:%20Defocus%20Blur%20Detection%20via%20Recurrently%20Fusing%20and%20Refining%20Discriminative%20Multi-Scale%20Deep%20Features&rft.jtitle=IEEE%20transactions%20on%20pattern%20analysis%20and%20machine%20intelligence&rft.au=Tang,%20Chang&rft.date=2022-02-01&rft.volume=44&rft.issue=2&rft.spage=955&rft.epage=968&rft.pages=955-968&rft.issn=0162-8828&rft.eissn=1939-3539&rft.coden=ITPIDJ&rft_id=info:doi/10.1109/TPAMI.2020.3014629&rft_dat=%3Cproquest_RIE%3E2431809587%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2617492222&rft_id=info:pmid/32759080&rft_ieee_id=9161280&rfr_iscdi=true