All-weather Multi-Modality Image Fusion: Unified Framework and 100k Benchmark
Format: Article
Language: English
Abstract: Multi-modality image fusion (MMIF) combines complementary information from
different image modalities to provide a more comprehensive and objective
interpretation of scenes. However, existing MMIF methods cannot withstand the
diverse weather interference encountered in real-world scenes, which prevents
them from being useful in practical applications such as autonomous driving.
To bridge this research gap, we propose an all-weather MMIF model. Achieving
effective multi-tasking in this context is particularly challenging due to the
complex and diverse nature of weather conditions. A key obstacle lies in the
'black box' nature of current deep learning architectures, which restricts
their multi-tasking capabilities. To overcome this, we decompose the network
into two modules: a fusion module and a restoration module. For the fusion
module, we introduce a learnable low-rank representation model to decompose
images into low-rank and sparse components; a reference formulation of this
decomposition is sketched after the abstract. This interpretable feature
separation lets the network's behavior be observed and understood more
directly. For the restoration module, we propose a physically aware
clear-feature prediction module based on the atmospheric scattering model
(recalled below), which can infer variations in light transmittance from both
scene illumination and reflectance. We also
construct a large-scale multi-modality dataset with 100,000 image pairs across
rain, haze, and snow conditions, covering various degradation levels and
diverse scenes to thoroughly evaluate image fusion methods in adverse weather.
Experimental results in both real-world and synthetic scenes show that the
proposed algorithm excels in detail recovery and multi-modality feature
extraction. The code is available at https://github.com/ixilai/AWFusion.
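The abstract does not state the learnable low-rank model's exact objective. As a point of reference only, a standard low-rank plus sparse decomposition of the kind such models typically unroll is the robust-PCA form below; this is an assumption for illustration, not the paper's stated formulation:

```latex
% Reference low-rank + sparse decomposition (robust-PCA form).
% X: input image/feature, L: low-rank component, S: sparse component,
% \lambda: balance weight. Assumed here for illustration; the paper's
% learnable model may unroll a different objective.
\min_{L,\,S} \;\; \lVert L \rVert_{*} + \lambda \lVert S \rVert_{1}
\quad \text{s.t.} \quad X = L + S
```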
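The atmospheric scattering model named in the abstract is the standard one from the dehazing literature; its forward form and the corresponding clear-scene inversion are:

```latex
% Standard atmospheric scattering model: I is the observed degraded image,
% J the clear scene radiance, A the global atmospheric light, and
% t(x) = e^{-\beta d(x)} the transmittance at pixel x (depth d, medium \beta).
I(x) = J(x)\, t(x) + A \bigl(1 - t(x)\bigr)
% Inverting for the clear scene (t_0 guards near-zero transmittance):
J(x) = \frac{I(x) - A}{\max\bigl(t(x),\, t_0\bigr)} + A
```

A minimal pixel-space sketch of this inversion follows. The paper's module works on features and predicts transmittance from illumination and reflectance, details the abstract does not give; `invert_scattering` and its arguments are illustrative names:

```python
import numpy as np

def invert_scattering(I, t, A, t_min=0.1):
    """Recover a clear image J from the scattering model I = J*t + A*(1 - t).

    I: (H, W, 3) degraded image in [0, 1]; t: (H, W) transmittance map;
    A: (3,) global atmospheric light; t_min avoids division by ~0.
    Illustrative only; the paper applies this reasoning in feature space.
    """
    t = np.clip(t, t_min, 1.0)[..., None]          # (H, W, 1) for broadcasting
    return np.clip((I - A) / t + A, 0.0, 1.0)      # pixel-wise inversion
```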
DOI: 10.48550/arxiv.2402.02090