Joint semantic-aware and noise suppression for low-light image enhancement without reference

Detailed Description

Bibliographic Details
Published in: Signal, Image and Video Processing, 2023-10, Vol. 17 (7), p. 3847-3855
Main Authors: Zhang, Meng; Liu, Lidong; Jiang, Donghua
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Digital images captured in the real world are inevitably affected by lighting and noise. Moreover, downstream high-level vision tasks, such as object detection and semantic segmentation, can be improved by adjusting the visibility of dark scenes. Although approaches built upon deep learning have achieved great success in the low-light enhancement field, the significant influence of semantic features and noise is often overlooked. Therefore, a new unsupervised low-light enhancement model based on semantic perception and noise suppression is proposed in this paper. First, an enhancement factor mapping is adopted to extract features from the low-light image; then, progressive curve enhancement is applied to adjust the enhancement curve. Unlike fully supervised learning methods, the proposed network is trained with unpaired images. Second, under the guidance of a semantic feature embedding module, the low-light enhancement preserves rich semantic information. Additionally, a self-supervised noise removal module is employed to effectively suppress noise interference and improve image quality. Experimental results and analysis indicate that the proposed scheme not only generates visually pleasing, artifact-free enhanced images but can also benefit multiple downstream vision tasks.
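The curve-adjustment step described in the abstract can be illustrated with a minimal sketch, assuming the progressive curve enhancement follows the common iterative quadratic formulation LE(x) = x + a·x·(1 − x), where a is a per-pixel enhancement factor map predicted from the input. The function names, the placeholder factor estimator, and the iteration count below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of curve-based low-light enhancement driven by an
# enhancement factor map. This is NOT the authors' implementation; it
# assumes the common iterative quadratic curve LE(x) = x + a*x*(1 - x),
# where `a` is a per-pixel factor map normally predicted by a network
# (here replaced by a constant placeholder so the sketch is runnable).
import numpy as np


def estimate_factor_map(image: np.ndarray) -> np.ndarray:
    """Placeholder for the learned enhancement-factor mapping.

    A real model would predict one factor map per iteration from the
    low-light input; here a constant positive map stands in for it.
    """
    return np.full_like(image, 0.6)


def progressive_curve_enhance(image: np.ndarray, n_iter: int = 8) -> np.ndarray:
    """Apply the quadratic adjustment curve `n_iter` times.

    `image` is expected in [0, 1]; each pass brightens dark pixels more
    than bright ones, so repeated application yields a progressive curve.
    """
    out = image.astype(np.float32)
    for _ in range(n_iter):
        alpha = estimate_factor_map(out)        # per-pixel enhancement factors
        out = out + alpha * out * (1.0 - out)   # LE(x) = x + a*x*(1 - x)
    return np.clip(out, 0.0, 1.0)


if __name__ == "__main__":
    low_light = np.random.rand(64, 64, 3).astype(np.float32) * 0.2  # dark dummy image
    enhanced = progressive_curve_enhance(low_light)
    print(low_light.mean(), enhanced.mean())  # mean brightness should increase
```

With a positive factor map, each pass lifts dark regions more strongly than bright ones. In the model described above, a learned network would replace the placeholder estimator, while the semantic feature embedding and self-supervised denoising modules would act alongside this curve-adjustment step.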
ISSN: 1863-1703 (print); 1863-1711 (electronic)
DOI: 10.1007/s11760-023-02613-z