Why mamba is effective? Exploit Linear Transformer-Mamba Network for Multi-Modality Image Fusion
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online Access: | Order full text |
Abstract:

Multi-modality image fusion aims to integrate the merits of images from different sources and render high-quality fused images. However, existing feature extraction and fusion methods are either constrained by an inherent local reduction bias and static parameters during inference (CNNs) or limited by quadratic computational complexity (Transformers), and cannot effectively extract and fuse features. To solve this problem, we propose a dual-branch image fusion network called Tmamba. It consists of a linear Transformer and Mamba, which provides global modeling capability while maintaining linear complexity. Owing to the structural differences between the Transformer and Mamba branches, the features extracted by the two branches carry channel and position information, respectively. A T-M interaction structure is designed between the two branches, using global learnable parameters and convolutional layers to transfer position and channel information, respectively. We further propose cross-modal interaction at the attention level to obtain cross-modal attention. Experiments show that our Tmamba achieves promising results on multiple fusion tasks, including infrared-visible image fusion and medical image fusion. Code with checkpoints will be available after the peer-review process.
DOI: 10.48550/arxiv.2409.03223
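
The abstract only outlines the architecture, and the authors' code has not yet been released, so the following is a minimal, hypothetical PyTorch sketch of the ideas it describes: a linear-attention branch and a Mamba-style branch process the same image tokens, and a T-M style interaction passes position information to the Transformer branch through a global learnable parameter and channel information to the Mamba branch through a convolutional layer. All class names, shapes, and the placeholder state-space branch are assumptions for illustration, not the paper's implementation; the cross-modal attention interaction is omitted here.

```python
# Hypothetical sketch of a dual-branch Transformer-Mamba fusion block.
# Everything below is an illustrative assumption, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAttention(nn.Module):
    """Kernelized (linear) self-attention: O(N) in the number of tokens."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.heads = heads
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                                   # x: (B, N, C)
        B, N, C = x.shape
        qkv = self.to_qkv(x).chunk(3, dim=-1)
        q, k, v = (t.reshape(B, N, self.heads, C // self.heads).transpose(1, 2)
                   for t in qkv)                             # each: (B, H, N, C/H)
        q = q.softmax(dim=-1)
        k = k.softmax(dim=-2)
        ctx = torch.einsum("bhnd,bhne->bhde", k, v)          # global context, (B, H, d, d)
        out = torch.einsum("bhnd,bhde->bhne", q, ctx)        # (B, H, N, d)
        return self.proj(out.transpose(1, 2).reshape(B, N, C))


class MambaLikeBranch(nn.Module):
    """Placeholder for a Mamba/SSM block: depthwise conv plus gating along the
    sequence. A real implementation would use a selective state-space layer."""
    def __init__(self, dim: int):
        super().__init__()
        self.dwconv = nn.Conv1d(dim, dim, kernel_size=3, padding=1, groups=dim)
        self.gate = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                                   # x: (B, N, C)
        seq = self.dwconv(x.transpose(1, 2)).transpose(1, 2)
        return self.proj(F.silu(seq) * torch.sigmoid(self.gate(x)))


class DualBranchFusionBlock(nn.Module):
    """One T-M style block: a global learnable scalar passes Mamba-side
    (position-oriented) features to the Transformer branch, and a 1x1 conv
    passes Transformer-side (channel-oriented) features to the Mamba branch."""
    def __init__(self, dim: int):
        super().__init__()
        self.t_branch = LinearAttention(dim)
        self.m_branch = MambaLikeBranch(dim)
        self.pos_gate = nn.Parameter(torch.zeros(1))          # global learnable parameter
        self.chan_mix = nn.Conv1d(dim, dim, kernel_size=1)    # channel-wise mixing

    def forward(self, x):                                   # x: (B, N, C) image tokens
        t = self.t_branch(x)
        m = self.m_branch(x)
        t = t + self.pos_gate * m                                  # position info -> T branch
        m = m + self.chan_mix(t.transpose(1, 2)).transpose(1, 2)   # channel info -> M branch
        return x + t + m


if __name__ == "__main__":
    tokens = torch.randn(2, 64 * 64, 32)     # e.g. a 64x64 feature map flattened to tokens
    block = DualBranchFusionBlock(32)
    print(block(tokens).shape)               # torch.Size([2, 4096, 32])
```

In this sketch both branches keep linear complexity in the token count, which is the property the abstract emphasizes; how the two modalities' features are actually fused at the attention level is left to the paper and its forthcoming code.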