Learning multi-granularity semantic interactive representation for joint low-light image enhancement and super-resolution
Published in: Information Fusion, 2024-10, Vol. 110, p. 102467, Article 102467
Format: Article
Language: English
Online access: Full text
Abstract: Images captured in challenging conditions often suffer from the co-existence of low contrast and low resolution. However, most joint enhancement methods focus on fitting a direct mapping from degraded images to high-quality images, which proves insufficient to handle complex degradation. To mitigate this, we propose a novel semantic prior guided interactive network (MSIRNet) to enable effective image representation learning for joint low-light enhancement and super-resolution. Specifically, a local HE-based domain transfer strategy is developed to remedy the domain gap between low-light images and the recognition scope of a generic segmentation model, thereby obtaining rich multi-granularity semantic priors. To represent hybrid-scale features with semantic attributes, we propose a multi-grained semantic progressive interaction module that formulates an omnidirectional blend self-attention mechanism, facilitating deep interaction between diverse semantic knowledge and visual features. Moreover, employing our feature normalized complementary module that perceives context and cross-feature relationships, MSIRNet adaptively integrates image features with the auxiliary visual atoms provided by the Codebook, endowing the model with high-fidelity reconstruction capability. Extensive experiments demonstrate the superior performance of our MSIRNet, showing its ability to restore visually and perceptually pleasing normal-light high-resolution images.
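The record does not specify how the local HE-based domain transfer is implemented. As a rough, hedged illustration of the idea (brightening a low-light image locally so that an off-the-shelf segmentation model trained on normal-light data can yield semantic priors), the sketch below uses contrast-limited local histogram equalization via OpenCV's CLAHE on the luminance channel. Function and parameter names are assumptions for illustration, not the authors' implementation.

import cv2
import numpy as np

def local_he_domain_transfer(low_light_bgr: np.ndarray,
                             clip_limit: float = 2.0,
                             tile_grid: int = 8) -> np.ndarray:
    """Local histogram equalization as a simple domain-transfer step (sketch only).

    Brightens a low-light image so a generic segmentation model can
    produce usable semantic priors; names/defaults are illustrative.
    """
    # Equalize only the luminance channel to limit color shifts.
    lab = cv2.cvtColor(low_light_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit,
                            tileGridSize=(tile_grid, tile_grid))
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

# Usage (hypothetical): feed the transferred image to a generic segmentation
# model to extract the multi-granularity semantic priors described above.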
• An interactive network for joint low-light image enhancement and super-resolution.
• We propose a local HE-based domain transfer strategy to minimize the domain gap.
• An omnidirectional blend attention mechanism for heterogeneous feature interaction.
• Integrating visual information flows using a feature normalized complementary module.
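The omnidirectional blend self-attention mechanism itself is not detailed in this record. As a minimal PyTorch sketch of the general pattern it builds on (attention-based interaction between flattened visual features and semantic prior tokens), the module below performs cross-attention with a residual fusion; all class names, shapes, and hyperparameters are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class SemanticVisualCrossAttention(nn.Module):
    """Cross-attention between visual tokens and semantic prior tokens (sketch)."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_tokens: torch.Tensor,
                semantic_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens:   (B, N_v, C) flattened image features
        # semantic_tokens: (B, N_s, C) tokens derived from segmentation priors
        fused, _ = self.attn(query=visual_tokens,
                             key=semantic_tokens,
                             value=semantic_tokens)
        return self.norm(visual_tokens + fused)  # residual fusion

# Example: fuse a 16x16 feature map (256 tokens) with 8 semantic tokens.
x = torch.randn(2, 256, 64)
s = torch.randn(2, 8, 64)
out = SemanticVisualCrossAttention()(x, s)  # -> shape (2, 256, 64)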
ISSN: 1566-2535, 1872-6305
DOI: 10.1016/j.inffus.2024.102467