Enlightening Low-Light Images With Dynamic Guidance for Context Enrichment

Images acquired in low-light conditions suffer from a series of visual quality degradations, e.g., low visibility, degraded contrast, and intense noise. These complicated, context-dependent degradations (e.g., noise in smooth regions, over-exposure in well-exposed regions, and low contrast around edges) pose major challenges for low-light image enhancement. Herein, we propose a new methodology that imposes a learnable guidance map, derived from signal and deep priors, enabling a deep neural network to enhance low-light images adaptively in a region-dependent manner. The enhancement capability of the learnable guidance map is further exploited through multi-scale dilated context collaboration, yielding contextually enriched feature representations extracted with receptive fields of various sizes. By assimilating the intrinsic perceptual information in the learned guidance map, the model generates richer and more realistic textures. Extensive experiments on real low-light images demonstrate the effectiveness of our method, which delivers superior results both quantitatively and qualitatively. The code is available at https://github.com/lingyzhu0101/GEMSC to facilitate future research.
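
The two mechanisms the abstract describes lend themselves to a compact illustration. The PyTorch sketch below shows one way a learnable guidance map could gate features produced by parallel dilated convolutions with different receptive fields; the module name `GuidedMultiScaleBlock`, the channel width, and the dilation rates are illustrative assumptions, not the authors' released design (see the GEMSC repository linked above for that).

```python
# Minimal sketch, assuming a guidance map predicted from the features and
# parallel 3x3 dilated convolutions for multi-scale context. All names and
# hyperparameters here are hypothetical.
import torch
import torch.nn as nn

class GuidedMultiScaleBlock(nn.Module):
    def __init__(self, channels: int = 64, dilations=(1, 2, 4, 8)):
        super().__init__()
        # One branch per dilation rate: same kernel, growing receptive field.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        # Fuse the concatenated multi-scale features back to `channels`.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)
        # Predict a single-channel, per-pixel guidance map in [0, 1].
        self.guidance = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Multi-scale dilated context: each branch sees a different neighborhood.
        context = torch.cat([branch(feats) for branch in self.branches], dim=1)
        context = self.fuse(context)
        # Region-dependent enhancement: the guidance map decides, per pixel,
        # how strongly the contextual correction is applied (e.g., gently in
        # smooth, noise-prone regions, strongly around low-contrast edges).
        g = self.guidance(feats)
        return feats + g * context

if __name__ == "__main__":
    block = GuidedMultiScaleBlock(channels=64)
    x = torch.randn(1, 64, 128, 128)   # a feature map from some encoder
    y = block(x)
    print(y.shape)                     # torch.Size([1, 64, 128, 128])
```

The residual formulation keeps such a block safe to stack: where the guidance map is near zero the input features pass through unchanged, so the per-pixel gate injects contextual corrections only where the map deems them useful.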

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2022-08, Vol. 32 (8), pp. 5068-5079
Authors: Zhu, Lingyu; Yang, Wenhan; Chen, Baoliang; Lu, Fangbo; Wang, Shiqi
Format: Article
Language: English
DOI: 10.1109/TCSVT.2022.3146731
ISSN: 1051-8215
eISSN: 1558-2205
Source: IEEE Electronic Library (IEL)
Keywords:
Adaptation models
Artificial neural networks
Context
contextual feature
Degradation
Feature extraction
guidance map
Histograms
Image acquisition
Image color analysis
Image contrast
Image edge detection
Image enhancement
Lighting
Low visibility
Low-light image enhancement