Advancing infrared and visible image fusion with an enhanced multiscale encoder and attention-based networks

Bibliographic Details
Published in: iScience 2024-10, Vol. 27 (10), p. 110915, Article 110915
Authors: Wang, Jiashuo; Chen, Yong; Sun, Xiaoyun; Xing, Hui; Zhang, Fan; Song, Shiji; Yu, Shuyong
Format: Article
Language: English
Subjects: Applied sciences; Engineering
Online Access: Full text
Description
Abstract: Infrared and visible image fusion aims to produce images that highlight key targets and offer distinct textures by merging thermal-radiation infrared images with detail-rich visible images. Traditional autoencoder-decoder-based fusion methods often rely on manually designed fusion strategies, which lack flexibility across different scenarios. To address this limitation, we introduce EMAFusion, a fusion approach featuring an enhanced multiscale encoder and a learnable, lightweight fusion network. The method incorporates skip connections, the convolutional block attention module (CBAM), and a nested architecture within the autoencoder-decoder framework to extract and preserve multiscale features for fusion tasks. Furthermore, a fusion network driven by spatial and channel attention mechanisms is proposed, designed to precisely capture and integrate essential features from both image types. Comprehensive evaluations on the TNO image fusion dataset affirm the proposed method’s superiority over existing state-of-the-art techniques, demonstrating its potential for advancing infrared and visible image fusion.
Highlights:
• A fusion approach with a multiscale encoder and attention-based networks is proposed
• The learnable fusion network avoids manually designed fusion strategies
• A feature loss function is designed to preserve more texture and salient features
• The method achieves improved performance over existing state-of-the-art fusion methods
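For readers unfamiliar with the attention component named in the abstract: CBAM is the convolutional block attention module of Woo et al. (ECCV 2018), which applies channel attention followed by spatial attention to a feature map. Below is a minimal PyTorch sketch of that standard formulation for reference only; the reduction ratio and 7x7 spatial kernel are common defaults assumed here, not EMAFusion's published configuration, and the paper's learnable fusion network is not reproduced.

    # Minimal CBAM sketch (standard formulation, Woo et al. 2018).
    # Hyperparameters (reduction=16, kernel_size=7) are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Reweight channels using globally pooled descriptors."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, _, _ = x.shape
            avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
            mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
            scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
            return x * scale

    class SpatialAttention(nn.Module):
        """Reweight spatial locations using channel-wise statistics."""
        def __init__(self, kernel_size: int = 7):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            avg = x.mean(dim=1, keepdim=True)    # per-pixel mean over channels
            mx = x.amax(dim=1, keepdim=True)     # per-pixel max over channels
            scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
            return x * scale

    class CBAM(nn.Module):
        """Channel attention followed by spatial attention."""
        def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
            super().__init__()
            self.ca = ChannelAttention(channels, reduction)
            self.sa = SpatialAttention(kernel_size)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.sa(self.ca(x))

A block like this is typically inserted after a convolutional layer, e.g. CBAM(64)(feature_map) for a 64-channel feature map; in the abstract's description it sits inside the multiscale encoder-decoder alongside skip connections and the nested architecture.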
ISSN: 2589-0042
DOI: 10.1016/j.isci.2024.110915