SMRFnet: Saliency multi-scale residual fusion network for grayscale and pseudo color medical image fusion
Saved in:
Published in: Biomedical Signal Processing and Control, 2025-02, Vol. 100, p. 107050, Article 107050
Main authors: , , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract highlights:
• SMRFnet is proposed and applied to medical image fusion.
• Three loss functions are used to optimize the network.
• Our objective indicators are superior to those of the reference algorithms.
• Our algorithm preserves richer perceptual detail and color information than the reference algorithms.
Multimodal medical images are now widely used in clinical practice, for example in surgical planning, remote guidance, and medical teaching. However, a single-modal medical image carries limited information, making it difficult for doctors to view a case from multiple perspectives and gain a comprehensive understanding of the patient's condition. Many multimodal medical image fusion algorithms have been proposed to overcome this limitation, but existing algorithms suffer from weak edge strength, detail loss, or color distortion. To address these shortcomings, a saliency multi-scale residual fusion network (SMRFnet) is proposed and applied to the fusion of grayscale and pseudo-color medical images. First, SMRFnet extracts saliency features through the VGG network. Then, the saliency features are added together to obtain the fusion features. Finally, the fusion features are fed into a multi-scale residual network and decoded into the fused image. Experiments show that, compared with the reference algorithms, the proposed algorithm preserves more salient information and detail in the fused images and achieves better objective indicators.
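The three-step pipeline in the abstract (saliency feature extraction, element-wise addition of features, then decoding) can be sketched in miniature. This is a toy illustration only: the actual SMRFnet uses pretrained VGG features and a learned multi-scale residual decoder, whereas the `extract_saliency_features` stub below is a hypothetical stand-in that just measures deviation from the image mean.

```python
import numpy as np

def extract_saliency_features(image: np.ndarray) -> np.ndarray:
    """Toy saliency map: absolute deviation from the global mean.
    (Stand-in for the VGG-based saliency features used in the paper.)"""
    return np.abs(image - image.mean())

def fuse(gray: np.ndarray, pseudo_color_luma: np.ndarray) -> np.ndarray:
    """Step 2 of the described pipeline: add the two saliency feature
    maps element-wise to obtain the fusion features."""
    f1 = extract_saliency_features(gray)
    f2 = extract_saliency_features(pseudo_color_luma)
    return f1 + f2  # a real decoder network would map these back to an image

# Example with random 8x8 "images" standing in for registered inputs.
rng = np.random.default_rng(0)
gray = rng.random((8, 8))
luma = rng.random((8, 8))
fused_features = fuse(gray, luma)
```

In the paper, the fusion-by-addition step is followed by a multi-scale residual decoder that reconstructs the final fused image; the sketch stops at the fused feature map.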
ISSN: 1746-8094
DOI: 10.1016/j.bspc.2024.107050