Light-Guided and Cross-Fusion U-Net for Anti-Illumination Image Super-Resolution


Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2022-12, Vol. 32 (12), p. 8436-8449
Authors: Cheng, Deqiang; Chen, Liangliang; Lv, Chen; Guo, Lin; Kou, Qiqi
Format: Article
Language: English
Abstract: Learning-based methods for single image super-resolution (SISR) can reconstruct realistic details, but they suffer severe performance degradation on low-light images because they ignore the negative effects of illumination, and they can even produce overexposure on unevenly illuminated images. In this paper, we pioneer an anti-illumination approach to SISR named Light-guided and Cross-fusion U-Net (LCUN), which simultaneously improves the texture details and the lighting of low-resolution images. In our design, we develop a U-Net for SISR (SRU) that reconstructs super-resolution (SR) images from coarse to fine, effectively suppressing noise and absorbing illuminance information. In particular, the proposed Intensity Estimation Unit (IEU) generates a light intensity map that guides SRU to adaptively brighten inconsistently illuminated regions. Further, to efficiently exploit key features while avoiding light interference, an Advanced Fusion Block (AFB) is developed to cross-fuse low-resolution features, reconstructed features, and illuminance features in pairs. Moreover, SRU introduces a gate mechanism to dynamically adjust its composition, overcoming the limitations of fixed-scale SR. LCUN is compared with retrained SISR methods and combined SISR methods on low-light and uneven-light images. Extensive experiments demonstrate that LCUN outperforms state-of-the-art SISR methods in terms of objective metrics and visual quality, reconstructing relatively clear textures and coping with complex lighting.
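The two guidance ideas in the abstract can be illustrated with a toy numpy sketch. This is purely illustrative and is not the paper's implementation: the real IEU and AFB are learned CNN modules, whereas here the "intensity map" is just the normalized image and the "fusion" is a fixed pairwise blend; the function names `intensity_guided_brighten` and `pairwise_cross_fuse` are hypothetical.

```python
import numpy as np

def intensity_guided_brighten(img, gamma_dark=0.5):
    """Brighten dark regions more than bright ones, guided by a
    per-pixel intensity map (here simply the normalized image, a
    crude stand-in for the learned Intensity Estimation Unit)."""
    intensity = img / max(float(img.max()), 1e-8)
    # The exponent interpolates between gamma_dark (dark pixels,
    # strong gamma lift) and 1.0 (bright pixels, left unchanged),
    # so uneven lighting is equalized rather than uniformly
    # overexposed.
    exponent = gamma_dark + (1.0 - gamma_dark) * intensity
    return np.power(img, exponent)

def pairwise_cross_fuse(f_lr, f_rec, f_ill, w=0.5):
    """Blend three feature streams in pairs and average the blends:
    a toy analogue of the Advanced Fusion Block's pairwise fusion of
    low-resolution, reconstructed, and illuminance features."""
    pairs = [(f_lr, f_rec), (f_lr, f_ill), (f_rec, f_ill)]
    return np.mean([w * a + (1.0 - w) * b for a, b in pairs], axis=0)

# Toy data: a tiny "image" in [0, 1] and three flat feature maps.
img = np.array([[0.04, 1.00], [0.25, 0.81]])
bright = intensity_guided_brighten(img)          # dark pixels lifted
fused = pairwise_cross_fuse(np.full((4, 4), 0.2),
                            np.full((4, 4), 0.6),
                            np.full((4, 4), 1.0))
```

Note the design point the abstract hints at: because the brightening exponent depends on local intensity, already-bright pixels pass through almost unchanged, which is how guidance by an intensity map avoids the overexposure that uniform enhancement causes.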
DOI: 10.1109/TCSVT.2022.3194169
ISSN: 1051-8215
EISSN: 1558-2205
Source: IEEE Electronic Library (IEL)
Subjects:
anti-illumination
cross-fusion
Estimation
Illuminance
Illumination
Image enhancement
Image reconstruction
Image resolution
Image super-resolution
intensity estimation
Interference
Light
Lighting
low-light image
Luminous intensity
Performance degradation
Photodegradation
Robustness
Superresolution
Visual effects