RetinexDIP: A Unified Deep Framework for Low-Light Image Enhancement
Published in: IEEE Transactions on Circuits and Systems for Video Technology, March 2022, Vol. 32, No. 3, pp. 1076-1088
Format: Article
Language: English
Abstract: Low-light images suffer from low contrast and unclear details, which not only reduces the information available to human viewers but also limits the application of computer vision algorithms. Among existing enhancement techniques, Retinex-based and learning-based methods are under the spotlight today. In this paper, we bridge the gap between the two. First, we propose a novel "generative" strategy for Retinex decomposition, by which the decomposition is cast as a generative problem. Second, based on this strategy, a unified deep framework is proposed to estimate the latent components and perform low-light image enhancement. Third, our method weakens the coupling relationship between the two components while performing Retinex decomposition. Finally, RetinexDIP performs Retinex decomposition without any external images, and the estimated illumination can be easily adjusted and used to perform enhancement. The proposed method is compared with ten state-of-the-art algorithms on seven public datasets, and the experimental results demonstrate its superiority. Code is available at: https://github.com/zhaozunjin/RetinexDIP
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2021.3073371
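
The abstract above only outlines the pipeline. As a rough illustration of the kind of approach it describes (not the authors' released implementation, which is linked in the record), the sketch below fits two small generator networks to a single low-light image in the spirit of Deep Image Prior, decomposes it into reflectance and illumination under the Retinex model S = R · L, and then gamma-adjusts the estimated illumination to brighten the result. The network shapes, loss, step count, and gamma value are assumptions chosen for brevity; the paper's full objective also includes regularization terms omitted here.

```python
# Illustrative sketch only: DIP-style, single-image Retinex decomposition.
import torch
import torch.nn as nn

def small_cnn(out_channels):
    # Tiny generator used for illustration; the real RetinexDIP architecture differs.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_channels, 3, padding=1), nn.Sigmoid(),
    )

def enhance(low_light, steps=500, gamma=0.4):
    """low_light: (1, 3, H, W) tensor with values in [0, 1]."""
    reflect_net = small_cnn(out_channels=3)   # generates reflectance R
    illum_net = small_cnn(out_channels=1)     # generates illumination L
    noise = torch.rand_like(low_light)        # fixed random input, DIP-style
    params = list(reflect_net.parameters()) + list(illum_net.parameters())
    opt = torch.optim.Adam(params, lr=1e-3)

    for _ in range(steps):
        R = reflect_net(noise)
        L = illum_net(noise)
        recon = R * L                               # Retinex model: S = R . L
        loss = torch.mean((recon - low_light) ** 2) # paper adds smoothness/regularization terms
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        R = reflect_net(noise)
        L = illum_net(noise)
        # Brighten by adjusting the estimated illumination (gamma < 1 lifts dark regions).
        return R * L.clamp(min=1e-3) ** gamma
```

Because the two components are produced by separate generators optimized on the single input image, no external training images are involved, and the enhancement step reduces to re-mapping the estimated illumination.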