Bridging the Gap Between Low-Light Raw and RGB Image Enhancement Using Domain Adversarial Transfer Network

Images captured under low-light conditions often suffer from a low signal-to-noise ratio (SNR) caused by low photon count, making the low-light image enhancement (LLIE) task very challenging. In this work, we contribute to the LLIE task in two dimensions. First, we propose a plug-and-play information integration-and-diffusion (InD) module to address the detail and color reconstruction problems of existing methods for supervised single-format LLIE (i.e., Raw or RGB format). The InD module uses carefully designed matrix multiplications to efficiently extract features that integrate global and pixel-level information. On top of this, we build a novel cross-format unsupervised domain adaptation (CUDA) framework to bridge the domain gap and tackle the unsupervised RGB format LLIE task by fully leveraging the Raw priors inherent in the pretrained Raw domain networks. Specifically, in the first stage, we train an RGB-to-Raw format conversion network to eliminate the format differences. Then, an unsupervised domain adversarial transfer network (DATN) is employed to decrease the feature distance between the target domain (RGB domain) data and the source domain (Raw domain) data. At last, the domain transferred low-light images are enhanced by the pretrained source domain network. Comprehensive experimental results show that the networks equipped with our InD modules outperform state-of-the-art supervised LLIE approaches on both RGB and Raw datasets. Moreover, our CUDA framework also achieves state-of-the-art unsupervised results on RGB datasets.
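The abstract attributes the InD module's efficiency to carefully designed matrix multiplications that fuse global and pixel-level information. The paper's exact formulation is not reproduced in this record, so the following is only a generic attention-style sketch in plain Python (the softmax weighting, the `query` vector, and all names are illustrative assumptions, not the authors' design): one matrix-vector product "integrates" per-pixel features into a single global descriptor, and a second product "diffuses" that descriptor back to every pixel.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def integrate_and_diffuse(feats, query):
    """feats: N per-pixel feature vectors of length C; query: length-C vector.
    Integration: feats @ query gives one score per pixel; softmax turns the
    scores into pooling weights; the weighted sum is one global descriptor.
    Diffusion: the descriptor is added back to every pixel, scaled by that
    pixel's weight."""
    scores = [sum(f * q for f, q in zip(row, query)) for row in feats]
    weights = softmax(scores)                        # N weights, sum to 1
    C = len(query)
    global_vec = [sum(w * row[c] for w, row in zip(weights, feats))
                  for c in range(C)]                 # length-C global descriptor
    diffused = [[row[c] + w * global_vec[c] for c in range(C)]
                for row, w in zip(feats, weights)]   # N x C enriched features
    return diffused, global_vec
```

A real module would learn the query and add projections and normalization; this only shows the two matrix products the abstract alludes to, and how they make every output pixel depend on global statistics.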

Detailed Description

Bibliographic Details
Published in: IEEE Sensors Journal, 2024-07, Vol. 24 (13), p. 20868-20883
Main Authors: Tang, Pengliang; Pei, Jiangbo; Han, Jianan; Men, Aidong
Format: Article
Language: English
Subjects:
Online Access: Order full text
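The record's abstract describes a three-stage CUDA pipeline: RGB-to-Raw format conversion, adversarial feature alignment via the DATN, and enhancement by the pretrained Raw-domain network. Adversarial alignment of this kind is commonly built on a gradient-reversal layer; the fragment below is a from-scratch illustration of that standard trick, not the authors' DATN (the linear domain classifier and all names are assumptions introduced here):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class GradientReversal:
    """Identity in the forward pass; scales gradients by -lam in the
    backward pass, so the upstream feature extractor is pushed to make
    source (Raw) and target (RGB) features indistinguishable while the
    domain classifier itself is still trained normally."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, feat):
        return list(feat)

    def backward(self, grad):
        return [-self.lam * g for g in grad]

def domain_bce_grad(feat, weights, label):
    """Linear domain classifier with binary cross-entropy on one feature
    vector; returns (loss, gradient of the loss w.r.t. feat)."""
    z = sum(f * w for f, w in zip(feat, weights))
    p = sigmoid(z)
    loss = -math.log(p) if label == 1 else -math.log(1.0 - p)
    dz = p - label                        # d(BCE)/dz for a sigmoid output
    return loss, [dz * w for w in weights]
```

In use, the classifier's gradient `g` from `domain_bce_grad` would update the classifier as-is, while the feature extractor receives `GradientReversal.backward(g)`, i.e. the negated (and `lam`-scaled) gradient, which is what shrinks the feature distance between the two domains.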
doi 10.1109/JSEN.2024.3396195
identifier ISSN: 1530-437X
identifier EISSN: 1558-1748
source IEEE Electronic Library (IEL)
subjects Adversarial learning
cross-format unsupervised domain adaptation (CUDA)
Datasets
Feature extraction
Format
global information extraction
Graphics processing units
Image color analysis
Image enhancement
Image reconstruction
low-light image enhancement (LLIE)
Modules
Sensors
Signal to noise ratio
Task analysis
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T09%3A56%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Bridging%20the%20Gap%20Between%20Low-Light%20Raw%20and%20RGB%20Image%20Enhancement%20Using%20Domain%20Adversarial%20Transfer%20Network&rft.jtitle=IEEE%20sensors%20journal&rft.au=Tang,%20Pengliang&rft.date=2024-07-01&rft.volume=24&rft.issue=13&rft.spage=20868&rft.epage=20883&rft.pages=20868-20883&rft.issn=1530-437X&rft.eissn=1558-1748&rft.coden=ISJEAZ&rft_id=info:doi/10.1109/JSEN.2024.3396195&rft_dat=%3Cproquest_RIE%3E3073299615%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3073299615&rft_id=info:pmid/&rft_ieee_id=10525682&rfr_iscdi=true