Image Intrinsic Components Guided Conditional Diffusion Model for Low-Light Image Enhancement

Bibliographic Details
Published in: IEEE transactions on circuits and systems for video technology 2024-12, Vol.34 (12), p.13244-13256
Main authors: Kang, Sicong, Gao, Shuaibo, Wu, Wenhui, Wang, Xu, Wang, Shuoyao, Qiu, Guoping
Format: Article
Language: English
Subjects:
Online access: Order full text
description By formulating image restoration as a generation problem, the conditional diffusion model has been applied to low-light image enhancement (LIE) to restore the details in dark regions. However, in previous diffusion-model-based LIE methods, the conditions used to guide generation are degraded images, such as the low-light image itself, a signal-to-noise ratio map, or a color map, which suffer from severe degradation and are simply fed into the diffusion model by rigid concatenation with the noise. To avoid the sub-optimal detail recovery and brightness enhancement that degraded conditions cause, we instead use the image intrinsic components originating from the Retinex model as guidance, flexibly integrate their multi-scale features into the diffusion model, and propose a novel conditional diffusion model for LIE. Specifically, the input low-light image is decomposed into reflectance and illumination by a Retinex decomposition module; these two components capture the abundant physical properties and lighting conditions of the scene. We then extract latent features from the two conditions through a component-dependent feature extraction module designed according to the physical properties of each component. Finally, instead of the previous rigid concatenation, a well-designed feature fusion mechanism adaptively embeds the generative conditions into the diffusion model. Extensive experimental results demonstrate that our method outperforms state-of-the-art methods and effectively restores local details while brightening dark regions. Our codes are available at https://github.com/Knossosc/ICCDiff .
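The Retinex decomposition referenced in the abstract can be illustrated with a classical, non-learned sketch. The paper itself uses a learned Retinex decomposition module; the channel-maximum illumination estimate below is only an assumed stand-in for illustration of the underlying model I = R * L:

```python
import numpy as np

def retinex_decompose(img, eps=1e-6):
    """Simplified single-scale Retinex-style decomposition.

    Under the Retinex model I = R * L, estimate the illumination L as
    the per-pixel maximum over color channels and recover reflectance
    as R = I / L. Assumes img is an HxWx3 float array in [0, 1].
    """
    illumination = img.max(axis=2, keepdims=True)   # HxWx1 lighting map
    reflectance = img / (illumination + eps)        # HxWx3 scene property
    return reflectance, illumination

# The decomposition is (approximately) invertible: R * L reconstructs I.
img = np.random.rand(4, 4, 3)
R, L = retinex_decompose(img)
assert np.allclose(R * L, img, atol=1e-4)
```

In the paper's pipeline these two components are not recombined directly; they serve as generative conditions whose multi-scale features are fused into the diffusion model.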
doi 10.1109/TCSVT.2024.3441713
format Article
identifier ISSN: 1051-8215; EISSN: 1558-2205
language eng
source IEEE Electronic Library (IEL)
subjects Brightening
Dark adaptation
Decomposition
diffusion model
Diffusion models
Feature extraction
Heat treating
Illumination
Image color analysis
Image degradation
Image enhancement
Image restoration
Light
Lighting
Low-light image enhancement
Modules
Photodegradation
Reflectivity
retinex decomposition
Signal generation
Signal to noise ratio
title Image Intrinsic Components Guided Conditional Diffusion Model for Low-Light Image Enhancement