An end-to-end method based on semantic region guidance for infrared and visible image fusion
Published in: Signal, Image and Video Processing, 2024-02, Vol. 18(1), pp. 295-303
Main authors: , ,
Format: Article
Language: eng
Keywords:
Online access: Full text
Abstract: The goal of infrared and visible image fusion is to fuse the dominant regions of the two modalities' images into a single high-quality fused image. However, existing methods still suffer from shortcomings such as a lack of effective supervision information, slow computation due to complex fusion rules, and the difficult convergence of GAN-based models. In this paper, we propose an end-to-end fusion method based on semantic region guidance. Our model contains three basic parts: a preprocessing module, an image generation module, and a semantic-guided information quantity discrimination module (IQDM). First, we feed the infrared and visible images into the preprocessing module to achieve a preliminary fusion of the images. Subsequently, the features are fed into the image generation module to generate the high-quality fused image. Finally, the training of the model is supervised by the semantic-guided IQDM. In particular, we build the image generation module on a diffusion model, which avoids the design of complex fusion rules and makes the module better suited to image fusion tasks. We conduct objective and subjective experiments on four public datasets. Compared with existing methods, the fusion results of the proposed method achieve better objective metrics and contain more detailed information.
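The record does not include code, so purely as a rough illustration of the three-module pipeline the abstract describes (preprocessing, image generation, semantic-guided IQDM supervision), here is a minimal PyTorch-style sketch. All module names, layer sizes, the mask handling, and the loss wiring are assumptions made for this sketch, not the authors' implementation; in particular, the paper's generation module is diffusion-based and its IQDM is guided by semantic regions, both of which are drastically simplified here.

```python
import torch
import torch.nn as nn

class PreprocessModule(nn.Module):
    """Hypothetical preliminary-fusion step: concatenate the two
    modalities and mix them with a small conv stack."""
    def __init__(self, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, ir, vis):
        # ir, vis: (B, 1, H, W) single-channel source images
        return self.net(torch.cat([ir, vis], dim=1))

class GenerationModule(nn.Module):
    """Stand-in for the paper's diffusion-based generator; a plain
    conv decoder is used here only to keep the sketch runnable."""
    def __init__(self, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

class IQDM(nn.Module):
    """Toy information-quantity discriminator: scores how much
    information an image carries inside a semantic region mask."""
    def __init__(self):
        super().__init__()
        self.score = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, stride=2, padding=1),
        )
    def forward(self, img, mask):
        # Down-sample the mask to the score-map resolution, then
        # average the scores over the masked (semantic) region.
        s = self.score(img)
        m = nn.functional.interpolate(mask, size=s.shape[-2:], mode="nearest")
        return (s * m).sum() / m.sum().clamp(min=1.0)

# One illustrative training step on dummy data.
pre, gen, iqdm = PreprocessModule(), GenerationModule(), IQDM()
opt = torch.optim.Adam(list(pre.parameters()) + list(gen.parameters()), lr=1e-4)

ir = torch.rand(2, 1, 64, 64)                     # infrared batch
vis = torch.rand(2, 1, 64, 64)                    # visible batch
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()   # semantic region mask

fused = gen(pre(ir, vis))
# Supervision idea only (a guess at "information quantity" guidance):
# push the fused image's region score toward the better source score.
target = torch.maximum(iqdm(ir, mask), iqdm(vis, mask)).detach()
loss = (target - iqdm(fused, mask)).abs()
opt.zero_grad(); loss.backward(); opt.step()
print(f"loss: {loss.item():.4f}")
```

The point of the sketch is the data flow: the fused image is produced end-to-end by the preprocessing and generation modules, while the IQDM acts only as a training-time supervisor over semantic regions, so no hand-designed fusion rule appears in the forward pass.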
ISSN: 1863-1703, 1863-1711
DOI: 10.1007/s11760-023-02748-z