A novel fusion method for infrared and visible images under poor illumination conditions


Detailed Description

Bibliographic Details
Published in: Infrared Physics & Technology 2023-09, Vol. 133, p. 104773, Article 104773
Main Authors: Li, Zhijian, Yang, Fengbao, Ji, Linna
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Abstract:
•A novel filter, RF-RGF, is proposed to decompose the source images.
•A new nonlinear function-based fusion rule is designed to fuse the small-scale detail layers.
•A new fusion rule based on the weighted sum of support values (WSSV) is constructed to fuse the large-scale detail layers.
•BIMEF and morphological bright and dark details (MBD), obtained by selective rules, are adopted to improve the contrast and sharpness of the fused images and to enhance targets.

Most infrared and visible image fusion methods are designed on the assumption that visible images contain rich scene information and more details, such as edges and textures, than infrared images, while infrared images carry prominent thermal target information. Under poor illumination, however, most areas of a visible image are dark, may contain considerable noise, and lack the detail information present in the corresponding infrared image. As a result, the fused images produced by such methods suffer from information loss, low contrast, and inconspicuous targets. To solve this problem, we propose a novel fusion method. First, an improved rolling guidance filter, named RF-RGF, is proposed to decompose the source images into small-scale detail, large-scale detail, and base layers. Second, for the fusion of the small-scale detail layers, a new nonlinear function-based rule is proposed to transfer more texture information from poorly illuminated source images into the fused image. For the fusion of the large-scale detail layers, a novel rule based on the weighted sum of support values (WSSV) is constructed to retain details effectively. Then, for the fusion of the base layers, a rule based on the visual saliency map (VSM) is adopted to ensure high contrast and a good overall appearance of the fused image. Moreover, BIMEF (bio-inspired multi-exposure fusion) and morphological bright and dark details (MBD) are used to further enhance the contrast and details of the fused image, making targets more conspicuous. Specifically, BIMEF is applied to enhance the visible image before decomposition.
The MBD, obtained by two selective rules based on morphological top- and bottom-transformations (MTB), is used to enhance the base layer. Experimental results show that the proposed method outperforms comparison methods, including several state-of-the-art methods, especially in artifact suppression, information retention, contrast improvement, and target enhancement.
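The three-layer pipeline described in the abstract can be sketched in Python. The paper's specific components (RF-RGF, the nonlinear small-scale rule, WSSV, VSM, BIMEF, and the MTB-based selective rules) are not reproduced here; this sketch substitutes common stand-ins so the overall structure is runnable: two-stage Gaussian smoothing for the decomposition, a max-absolute rule for the detail layers, a local-contrast-weighted average for the base layers, and plain top-hat/bottom-hat transforms for the bright/dark detail enhancement. All function names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, grey_opening, grey_closing

def decompose(img, sigma_small=1.0, sigma_large=4.0):
    """Split an image into small-scale detail, large-scale detail,
    and base layers (Gaussian stand-in for RF-RGF). The three layers
    sum back to the input exactly."""
    smooth1 = gaussian_filter(img, sigma_small)
    smooth2 = gaussian_filter(smooth1, sigma_large)
    return img - smooth1, smooth1 - smooth2, smooth2

def fuse_details(d1, d2):
    """Max-absolute rule: keep the stronger detail at each pixel
    (stand-in for the nonlinear and WSSV rules)."""
    return np.where(np.abs(d1) >= np.abs(d2), d1, d2)

def fuse_base(b1, b2):
    """Weight each base layer by a crude local-contrast saliency map
    (stand-in for the VSM-based rule)."""
    s1 = np.abs(b1 - gaussian_filter(b1, 3)) + 1e-6
    s2 = np.abs(b2 - gaussian_filter(b2, 3)) + 1e-6
    w = s1 / (s1 + s2)
    return w * b1 + (1 - w) * b2

def mbd_enhance(base, size=3):
    """Add bright details (top-hat) and subtract dark details
    (bottom-hat) to sharpen the base layer."""
    footprint = np.ones((size, size))
    bright = base - grey_opening(base, footprint=footprint)
    dark = grey_closing(base, footprint=footprint) - base
    return base + bright - dark

def fuse(ir, vis):
    """Full pipeline: decompose both inputs, fuse each layer with its
    own rule, enhance the fused base, and recombine."""
    ir_s, ir_l, ir_b = decompose(ir)
    vis_s, vis_l, vis_b = decompose(vis)
    base = mbd_enhance(fuse_base(ir_b, vis_b))
    return base + fuse_details(ir_s, vis_s) + fuse_details(ir_l, vis_l)
```

In the paper, BIMEF would additionally pre-enhance the dark visible image before `decompose` is called; that step is omitted here because BIMEF is a separate multi-exposure fusion method.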
ISSN: 1350-4495, 1879-0275
DOI: 10.1016/j.infrared.2023.104773