Unsupervised Decomposition and Correction Network for Low-Light Image Enhancement
| Published in: | IEEE Transactions on Intelligent Transportation Systems, 2022-10, Vol. 23 (10), pp. 19440-19455 |
|---|---|
| Main authors: | , , , , , |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
| Abstract: | Vision-based intelligent driving assistance systems and transportation systems can be improved by enhancing the visibility of scenes captured under extremely challenging conditions. In particular, many low-light image enhancement (LIE) algorithms have been proposed to facilitate such applications in low-light conditions. While deep learning-based methods have achieved substantial success in this field, most of them require paired training data, which is difficult to collect. This paper advocates a novel Unsupervised Decomposition and Correction Network (UDCN) for LIE that does not depend on paired data for training. Inspired by the Retinex model, our method first decomposes images into illumination and reflectance components with an image decomposition network (IDN). The decomposed illumination is then processed by an illumination correction network (ICN) and fused with the reflectance to generate a primary enhanced result. In contrast with fully supervised learning approaches, UDCN is unsupervised and is trained only with low-light images and their histogram-equalized (HE) counterparts (which can be derived from the low-light image itself) as input. Both the decomposition and correction networks are optimized under the guidance of hybrid no-reference quality-aware losses and inter-consistency constraints between the low-light image and its HE counterpart. In addition, we utilize an unsupervised noise removal network (NRN) to remove the noise previously hidden in the darkness, further improving the primary result. Qualitative and quantitative comparison results are reported to demonstrate the efficacy of UDCN and its superiority over several representative alternatives in the literature. The results and code will be made publicly available at https://github.com/myd945/UDCN. |
| ISSN: | 1524-9050, 1558-0016 |
| DOI: | 10.1109/TITS.2022.3165176 |
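
The abstract describes a Retinex-style pipeline: decompose the low-light image into illumination and reflectance, correct the illumination, fuse the two components, and train only against a histogram-equalized (HE) counterpart derived from the input itself. The NumPy sketch below illustrates that data flow with hand-crafted stand-ins rather than the paper's learned networks: a max-over-channels map replaces the learned IDN, a fixed gamma curve replaces the ICN, and the NRN denoising stage is omitted. All function names here are illustrative assumptions and are not taken from the paper or its repository.

```python
# Minimal sketch of the Retinex-based "decompose, correct, fuse" idea and the
# self-derived HE counterpart mentioned in the abstract. This is NOT the UDCN
# implementation: the decomposition and correction below are fixed, hand-crafted
# operations standing in for the learned IDN and ICN.

import numpy as np


def histogram_equalize(gray: np.ndarray) -> np.ndarray:
    """Histogram-equalize a single-channel image with values in [0, 1].

    The abstract notes that the HE counterpart used as the second training
    input can be derived from the low-light image itself.
    """
    hist, bins = np.histogram(gray.ravel(), bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1e-8)
    return np.interp(gray.ravel(), bins[:-1], cdf).reshape(gray.shape)


def decompose(image: np.ndarray, eps: float = 1e-4):
    """Split an RGB image in [0, 1] into illumination and reflectance.

    Retinex assumes image = reflectance * illumination; a max-over-channels
    map is a simple classical stand-in for the learned IDN output.
    """
    illumination = image.max(axis=2, keepdims=True)
    reflectance = image / (illumination + eps)
    return illumination, reflectance


def correct_illumination(illumination: np.ndarray, gamma: float = 0.45) -> np.ndarray:
    """Brighten the illumination map (fixed gamma curve as a stand-in for the ICN)."""
    return np.power(np.clip(illumination, 0.0, 1.0), gamma)


def enhance(image: np.ndarray) -> np.ndarray:
    """Primary enhanced result: corrected illumination fused with the reflectance."""
    illumination, reflectance = decompose(image)
    corrected = correct_illumination(illumination)
    return np.clip(reflectance * corrected, 0.0, 1.0)


if __name__ == "__main__":
    low_light = np.random.rand(64, 64, 3) * 0.2                   # synthetic dark image
    he_counterpart = histogram_equalize(low_light.mean(axis=2))    # HE input derived from the image itself
    enhanced = enhance(low_light)
    print(enhanced.min(), enhanced.max())
```

In UDCN itself, both the decomposition and the correction are learned from unpaired data under no-reference quality-aware losses and inter-consistency constraints with the HE counterpart, and a separate noise removal network cleans up noise revealed by the brightening; the sketch above only mirrors the overall data flow.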