Image Intrinsic Components Guided Conditional Diffusion Model for Low-Light Image Enhancement

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 12, pp. 13244-13256, December 2024
Authors: Kang, Sicong; Gao, Shuaibo; Wu, Wenhui; Wang, Xu; Wang, Shuoyao; Qiu, Guoping
Format: Article
Language: English
Abstract
By formulating image restoration as a generation problem, conditional diffusion models have been applied to low-light image enhancement (LIE) to restore the details in dark regions. However, in previous diffusion-based LIE methods, the conditions used to guide generation are degraded images, such as the low-light image itself, a signal-to-noise-ratio map, or a color map, which suffer from severe degradation and are simply fed into the diffusion model by rigid concatenation with the noise. To avoid the sub-optimal detail recovery and brightness enhancement caused by such degraded conditions, we instead use the image intrinsic components of the Retinex model as guidance, flexibly integrating their multi-scale features into the diffusion model, and propose a novel conditional diffusion model for LIE. Specifically, the input low-light image is decomposed into reflectance and illumination by a Retinex decomposition module; the two components capture the physical properties and the lighting conditions of the scene, respectively. We then extract latent features from the two conditions through a component-dependent feature extraction module designed according to the physical properties of each component. Finally, instead of the rigid concatenation used previously, a well-designed feature fusion mechanism adaptively embeds the generative conditions into the diffusion model. Extensive experimental results demonstrate that our method outperforms state-of-the-art methods and effectively restores local details while brightening dark regions. Our code is available at https://github.com/Knossosc/ICCDiff .
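
To make the pipeline concrete, below is a minimal PyTorch sketch of the three stages the abstract describes: Retinex decomposition into reflectance and illumination, component-dependent feature extraction, and adaptive fusion of the condition features into the denoiser. All module names and architectural details here (RetinexDecomposition, CondEncoder, AdaptiveFusion, the FiLM-style modulation) are illustrative assumptions, not the authors' implementation, which is available at the GitHub link above.

import torch
import torch.nn as nn

class RetinexDecomposition(nn.Module):
    """Hypothetical stand-in for a Retinex decomposition module:
    splits a low-light image I into reflectance R (physical properties)
    and illumination L (lighting conditions), so that I ~= R * L."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 4, 3, padding=1),  # 3 reflectance + 1 illumination channel
        )

    def forward(self, img):
        out = torch.sigmoid(self.net(img))
        return out[:, :3], out[:, 3:]  # reflectance, illumination

class CondEncoder(nn.Module):
    """Component-dependent feature extraction: a separate small encoder
    per condition, so each branch can adapt to its component."""
    def __init__(self, in_ch, feat_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class AdaptiveFusion(nn.Module):
    """One plausible adaptive embedding of conditions: FiLM-style
    scale/shift modulation of the denoiser features, rather than
    rigid channel concatenation with the noise."""
    def __init__(self, feat_ch):
        super().__init__()
        self.to_scale_shift = nn.Conv2d(feat_ch, feat_ch * 2, 1)

    def forward(self, feat, cond_feat):
        scale, shift = self.to_scale_shift(cond_feat).chunk(2, dim=1)
        return feat * (1 + scale) + shift

# One denoising step at a single scale; a full model would repeat the
# fusion at several scales of a U-Net and also condition on the timestep.
decomp = RetinexDecomposition()
enc_r, enc_l = CondEncoder(3), CondEncoder(1)
fusion = AdaptiveFusion(64)
stem = nn.Conv2d(3, 64, 3, padding=1)   # embeds the noisy sample x_t
head = nn.Conv2d(64, 3, 3, padding=1)   # predicts the noise epsilon

low_light = torch.rand(1, 3, 64, 64)    # degraded input image
x_t = torch.randn(1, 3, 64, 64)         # noisy sample at timestep t

reflectance, illumination = decomp(low_light)
cond = enc_r(reflectance) + enc_l(illumination)  # merged condition features
eps_pred = head(fusion(stem(x_t), cond))         # condition-guided prediction
print(eps_pred.shape)  # torch.Size([1, 3, 64, 64])

The key design choice this sketch illustrates is that the conditions modulate the denoiser's intermediate features rather than being concatenated with the noise at the input, which is what lets the guidance act flexibly at multiple scales.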
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2024.3441713