Joint transformer progressive self‐calibration network for low light enhancement

Bibliographic Details
Published in: IET Image Processing, 2023-04, Vol. 17 (5), p. 1493-1509
Main authors: Fan, Junyu; Li, Jinjiang; Hua, Zhen; Fan, Linwei
Format: Article
Language: English
Online access: Full text
Description
Abstract: When lighting conditions are poor and ambient light is weak, images captured by an imaging device often have low brightness and are accompanied by considerable noise. This paper designs a progressive self-calibration network model (PSCNet) for recovering high-quality low-light-enhanced images. First, an attention mechanism helps the network focus on and extract shallow features from the low-light image. Next, the feature maps are passed to the encoder and decoder modules, where the transformer and the encoder-decoder skip-connection structure combine contextual semantic information to learn rich deep features. Finally, the self-calibration module adaptively cascades the features produced by the decoder and feeds them into the residual attention module quickly and accurately. Meanwhile, the LBP features of the image are fused into the feature information of the residual attention module to enhance its detailed texture. Qualitative analysis and quantitative comparison of a large number of experimental results show that this method outperforms existing methods. A progressive self-calibration network model (PSCNet) is designed to restore high-quality enhanced images by combining convolutional neural networks, encoder-decoder structures, and a transformer.
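
The abstract mentions fusing LBP (Local Binary Pattern) features into the residual attention module to strengthen texture detail. Below is a minimal NumPy sketch of the classic 8-neighbour LBP descriptor for a grayscale image; the neighbour ordering, the function name lbp_8, and any downstream fusion step are illustrative assumptions, not details taken from the paper.

import numpy as np

def lbp_8(gray: np.ndarray) -> np.ndarray:
    """Compute the classic 3x3 LBP code for each interior pixel.

    gray: 2-D array of intensities with shape (H, W).
    Returns an (H-2, W-2) array of 8-bit LBP codes.
    """
    c = gray[1:-1, 1:-1]  # centre pixels
    # Offsets of the 8 neighbours, ordered clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the image aligned with the centre pixels.
        neighbour = gray[1 + dy:gray.shape[0] - 1 + dy,
                         1 + dx:gray.shape[1] - 1 + dx]
        # Set this bit wherever the neighbour is at least as bright as the centre.
        code |= (neighbour >= c).astype(np.uint8) << bit
    return code

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    print(lbp_8(img).shape)  # (62, 62)

In the architecture described above, such an LBP map would be one additional channel of texture information made available to the residual attention module; how the paper concatenates or weights it is not specified in this record.
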
ISSN: 1751-9659, 1751-9667
DOI: 10.1049/ipr2.12732