Latent Representation Learning Model for Multi-Band Images Fusion via Low-Rank and Sparse Embedding


Bibliographic Details
Published in: IEEE Transactions on Multimedia, 2021, Vol. 23, pp. 3137-3152
Authors: Wang, Bin; Niu, Huifang; Zeng, Jianchao; Bai, Guifeng; Lin, Suzhen; Wang, Yanbo
Format: Article
Language: English
Description
Abstract: The fusion of multi-band images, including far-infrared images (FIRI), near-infrared images (NIRI), and visible images (VISI), primarily faces four challenges. The first is the problem of simultaneously fusing multiple images. Most existing methods are oriented towards the fusion of two objects and are generally applied sequentially: intermediate fusion results are repeatedly integrated with the unprocessed images until all images have been fused. However, this may amplify blurring effects and even introduce artifacts. Second, consistent training labels for image fusion cannot currently be obtained for some types of images (e.g., medical images and multi-band images), which prevents the application of supervised learning methods. Third, existing methods often do not directly model the potential mapping relationship between the original and resulting images, which increases the unpredictability of the fusion results. Fourth, redundant features and singularities are often not eliminated in the general fusion process, and both may interfere with or even obscure significant features in the source images. To address these problems, this paper proposes a latent representation learning model that can synchronously integrate multi-band images without training samples. Specifically, the model captures the clean and distinctive features of the originals via latent low-rank and sparse embedding. The extracted intrinsic features are projected onto the target fusion space through an assumed mapping relationship, and the final results are obtained through the designed optimization algorithm. In addition, numerous experiments were conducted to demonstrate the rationality and feasibility of the proposed fusion model via subjective evaluation, objective indexes, and convergence analysis.
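The abstract's core idea — splitting each band into a clean low-rank (base) component and a sparse (salient detail) component, then fusing all bands at once rather than sequentially — can be illustrated with a simplified sketch. This is not the paper's optimization algorithm: the truncated-SVD split, soft-thresholding, averaging of base parts, and max-magnitude selection of detail parts below are all stand-in assumptions chosen only to make the decompose-then-fuse structure concrete.

```python
import numpy as np

def decompose(img, rank=5, sparse_thresh=0.05):
    """Split an image into a low-rank part (truncated SVD) and a sparse
    part (soft-thresholded residual). A crude stand-in for the latent
    low-rank and sparse embedding described in the paper."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    residual = img - low_rank
    # keep only salient residual detail (the sparse component)
    sparse = np.sign(residual) * np.maximum(np.abs(residual) - sparse_thresh, 0.0)
    return low_rank, sparse

def fuse(images, rank=5, sparse_thresh=0.05):
    """Fuse all bands synchronously: average the low-rank (base) parts,
    and pick the max-magnitude sparse (detail) part per pixel."""
    parts = [decompose(im, rank, sparse_thresh) for im in images]
    base = np.mean([lr for lr, _ in parts], axis=0)
    details = np.stack([sp for _, sp in parts])
    idx = np.argmax(np.abs(details), axis=0)
    detail = np.take_along_axis(details, idx[None], axis=0)[0]
    return base + detail

rng = np.random.default_rng(0)
bands = [rng.random((64, 64)) for _ in range(3)]  # stand-ins for FIRI/NIRI/VISI
fused = fuse(bands)
print(fused.shape)  # (64, 64)
```

Because every band is decomposed before any fusion occurs, no intermediate fused image is re-fused with later inputs — which is the abstract's argument for why a synchronous scheme avoids the accumulated blurring of sequential pairwise fusion.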
ISSN: 1520-9210; 1941-0077
DOI: 10.1109/TMM.2020.3020695