Bridging the Gap Between Low-Light Raw and RGB Image Enhancement Using Domain Adversarial Transfer Network


Bibliographic Details
Published in: IEEE Sensors Journal, July 2024, Vol. 24, No. 13, pp. 20868-20883
Authors: Tang, Pengliang; Pei, Jiangbo; Han, Jianan; Men, Aidong
Format: Article
Language: English
Description
Abstract: Images captured under low-light conditions often suffer from a low signal-to-noise ratio (SNR) caused by low photon count, making the low-light image enhancement (LLIE) task very challenging. In this work, we contribute to the LLIE task in two dimensions. First, we propose a plug-and-play information integration-and-diffusion (InD) module to address the detail and color reconstruction problems of existing methods for supervised single-format LLIE (i.e., Raw or RGB format). The InD module uses carefully designed matrix multiplications to efficiently extract features that integrate global and pixel-level information. On top of this, we build a novel cross-format unsupervised domain adaptation (CUDA) framework to bridge the domain gap and tackle the unsupervised RGB-format LLIE task by fully leveraging the Raw priors inherent in pretrained Raw-domain networks. Specifically, in the first stage, we train an RGB-to-Raw format conversion network to eliminate the format differences. Then, an unsupervised domain adversarial transfer network (DATN) is employed to reduce the feature distance between the target-domain (RGB) data and the source-domain (Raw) data. Finally, the domain-transferred low-light images are enhanced by the pretrained source-domain network. Comprehensive experimental results show that networks equipped with our InD modules outperform state-of-the-art supervised LLIE approaches on both RGB and Raw datasets. Moreover, our CUDA framework also achieves state-of-the-art unsupervised results on RGB datasets.
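As a rough, hypothetical illustration of the three-stage inference pipeline the abstract describes (RGB-to-Raw format conversion, then domain adversarial transfer, then enhancement by the pretrained Raw-domain network), the sketch below wires together placeholder functions. All names (`rgb_to_raw`, `datn`, `raw_enhancer`) and their internals are illustrative stand-ins for the paper's learned networks, not the actual method:

```python
import numpy as np

def rgb_to_raw(rgb):
    # Stage 1: format conversion network (placeholder: collapse the three
    # RGB channels into a single Raw-like channel by averaging).
    return rgb.mean(axis=-1, keepdims=True)

def datn(raw_like):
    # Stage 2: domain adversarial transfer network, which in the paper moves
    # target-domain (RGB-derived) features toward the source (Raw) feature
    # distribution (placeholder: simple standardization).
    return (raw_like - raw_like.mean()) / (raw_like.std() + 1e-8)

def raw_enhancer(x):
    # Stage 3: pretrained source-domain enhancement network
    # (placeholder: rescale to [0, 1] and apply gamma-style brightening).
    x = x - x.min()
    x = x / (x.max() + 1e-8)
    return x ** 0.5

def enhance_low_light_rgb(rgb):
    # Compose the three stages exactly in the order the abstract lists them.
    return raw_enhancer(datn(rgb_to_raw(rgb)))

# Toy low-light input: an 8x8 RGB image with intensities in [0, 0.1).
low_light = np.random.default_rng(0).uniform(0.0, 0.1, size=(8, 8, 3))
out = enhance_low_light_rgb(low_light)
```

The point of the sketch is only the data flow: the RGB input never reaches the Raw-domain enhancer directly, but only after format conversion and feature alignment.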
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2024.3396195