Edge preserving infrared and visible image fusion with three layer decomposition based on multi-level co-occurrence filtering
Published in: Infrared Physics & Technology, 2024-06, Vol. 139, p. 105336, Article 105336
Main authors: ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: By merging multiple images of a particular scene, image fusion attempts to create a single blended image that combines details from all of them. Infrared (IR) and visible image fusion can be accomplished in a variety of ways, including recent deep-learning-based techniques. Edge-preserving filter (EPF) based fusion works well because it retains the information from both images; local filtering-based techniques, on the other hand, limit fusion performance by introducing gradient reversal artifacts and halos. This work presents an advanced IR and visible image fusion approach based on three-level decomposition using multi-level co-occurrence filtering, which aims to overcome common shortfalls such as the halo effects seen in existing EPF-based fusion. The reference images are decomposed into a base layer, small-scale layers, and large-scale layers using multi-level co-occurrence filtering (MLCoF). Since most of the low-frequency detail is contained in the base layer, the conventional merging strategy of averaging is replaced with a novel foreground information map (FIM) based fusion strategy. Small-scale layers are combined by applying a max-absolute fusion strategy, and a novel weight-map-guided edge-preserving fusion strategy is put forward for the integration of the large-scale layers. The fused image is then generated by superposing these layers. Subjective visual and objective quantitative analysis shows that the proposed technique attains more notable performance than other modern fusion methods, including many deep-learning techniques. From a visual perspective, the results produced by the proposed approach are superior and include details from both images; additionally, the outcomes are free of gradient reversal and halo artifacts.
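The pipeline described above can be sketched in a few lines. This is a minimal illustration only: the paper's edge-aware multi-level co-occurrence filter (MLCoF) is replaced here with plain Gaussian smoothing, and simple averaging stands in for the FIM-based base-layer strategy and the weight-map-guided large-scale strategy, neither of which is specified in this record. Only the three-layer decomposition structure and the max-absolute rule for small-scale layers follow the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose_three_layers(img, sigmas=(2.0, 8.0)):
    """Split an image into base, small-scale, and large-scale layers.

    Gaussian smoothing is an illustrative stand-in for the paper's
    multi-level co-occurrence filter; sigmas are arbitrary choices.
    """
    smooth1 = gaussian_filter(img, sigmas[0])
    smooth2 = gaussian_filter(smooth1, sigmas[1])
    small = img - smooth1      # small-scale detail layer
    large = smooth1 - smooth2  # large-scale detail layer
    base = smooth2             # base (low-frequency) layer
    return base, small, large

def fuse_max_abs(a, b):
    # Max-absolute rule used for the small-scale layers:
    # at each pixel, keep the coefficient with the larger magnitude.
    return np.where(np.abs(a) >= np.abs(b), a, b)

# Toy inputs in place of registered IR and visible images.
ir = np.random.rand(64, 64)
vis = np.random.rand(64, 64)

b1, s1, l1 = decompose_three_layers(ir)
b2, s2, l2 = decompose_three_layers(vis)

# Averaging stands in for the FIM-based and weight-map-guided strategies;
# the fused image is the superposition of the fused layers.
fused = 0.5 * (b1 + b2) + fuse_max_abs(s1, s2) + 0.5 * (l1 + l2)
```

Note that the decomposition is exactly invertible (base + small + large reconstructs the input), so any loss of detail in the fused result comes from the per-layer fusion rules, not the decomposition itself.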
Highlights:
- Advanced IR and visible image fusion approach for reducing artifacts, based on three-level decomposition using multi-level co-occurrence filtering.
- The filters used in this work help preserve the target and background information.
- The base layer is fused using a foreground information map-based fusion strategy.
- Small-scale layers and large-scale layers are fused using separate fusion strategies.
ISSN: 1350-4495, 1879-0275
DOI: 10.1016/j.infrared.2024.105336