Candros optimization algorithm based dual attention LieNet model for low light image enhancement
Published in: Signal, Image and Video Processing, 2024-08, Vol. 18 (6-7), p. 5281-5299
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Images captured in low-light environments appear dark and have low visual quality due to inadequate light exposure, which degrades the image view and affects downstream applications. Quality enhancement of low-light images therefore plays a significant role in multimedia applications and image processing. Numerous existing methods attempt to handle the issues of low-light images through advanced learning techniques, yet these methods fail to provide complete contextual information in the image. Hence, a Candros optimization-based dual attention network is proposed in this research for image enhancement, ensuring its applicability in the medical field. A dual attention LieNet is constructed using channel and position attention modules to extract the relevant features that support image enhancement, and the Candros optimization is developed to manage the computational complexity, which together increase the quality of the image. Further, the Candros optimization algorithm determines the optimal fusion parameter for establishing the final enhanced image. The experimental outcomes reveal the dominance of the proposed image enhancement model, which acquired a peak signal-to-noise ratio of 37.44 dB, a similarity index measure of 0.893, a visual information fidelity of 0.869, a feature similarity of 0.934, and a visual saliency-induced index of 0.924 on the LID data set without noise.
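The abstract describes a dual attention design that combines channel and position attention and fuses their outputs with a parameter tuned by the Candros optimization algorithm. The record does not give the exact architecture, so the following is only a minimal PyTorch sketch under common assumptions: squeeze-excitation style channel attention, DANet-style position (spatial) attention, and a fixed scalar fusion weight `alpha` standing in for the optimized fusion parameter. Layer sizes and module names are illustrative, not the paper's implementation.

```python
# Minimal sketch of a dual-attention block (channel + position attention)
# fused by a scalar weight. The squeeze-excitation style channel attention,
# the DANet-style position attention, and the fusion weight `alpha` are
# illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Re-weights feature channels using globally pooled statistics."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)


class PositionAttention(nn.Module):
    """Re-weights spatial positions with a self-attention map over pixels."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c/8)
        k = self.key(x).flatten(2)                     # (b, c/8, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        attn = self.softmax(q @ k)                     # (b, hw, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + out


class DualAttentionBlock(nn.Module):
    """Fuses channel- and position-attended features with a scalar weight.

    The paper reports that the fusion parameter is tuned by the Candros
    optimization algorithm; here it is a fixed constructor argument.
    """

    def __init__(self, channels: int, alpha: float = 0.5):
        super().__init__()
        self.channel_attn = ChannelAttention(channels)
        self.position_attn = PositionAttention(channels)
        self.alpha = alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.alpha * self.channel_attn(x) + (1 - self.alpha) * self.position_attn(x)


if __name__ == "__main__":
    block = DualAttentionBlock(channels=32, alpha=0.6)
    features = torch.randn(1, 32, 64, 64)    # batch of feature maps
    print(block(features).shape)              # torch.Size([1, 32, 64, 64])
```

In this sketch the optimized fusion parameter simply interpolates between the two attended feature maps; the actual paper applies its optimizer both to the fusion parameter and to the overall computational cost, details the record does not provide.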
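The reported figures (37.44 dB PSNR and the various similarity indices) are standard full-reference image-quality metrics computed against a reference image. As a small, self-contained example of how the headline metric is obtained, here is a NumPy sketch of PSNR assuming 8-bit images; the sample images are synthetic and purely illustrative.

```python
# Minimal sketch of the peak signal-to-noise ratio (PSNR) used in the
# reported evaluation; assumes 8-bit images compared against a reference.
import numpy as np


def psnr(reference: np.ndarray, enhanced: np.ndarray, peak: float = 255.0) -> float:
    """PSNR in dB between a reference image and its enhanced version."""
    mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)


if __name__ == "__main__":
    ref = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(ref, noisy):.2f} dB")
```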
ISSN: 1863-1703, 1863-1711
DOI: 10.1007/s11760-024-03232-y