CFDformer: Medical image segmentation based on cross fusion dual attention network

Bibliographic Details
Published in: Biomedical Signal Processing and Control, 2025-03, Vol. 101, Article 107208
Main authors: Yang, Zhou; Wang, Hua; Liu, Yepeng; Zhang, Fan
Format: Article
Language: English
Online access: Full text
Description
Abstract: In recent years, with the continued development of Transformers in computer vision, they have been applied to many image-related tasks, including medical image segmentation. However, traditional self-attention has quadratic computational complexity, which makes model training expensive. At the same time, many of the related efficiency improvements suffer from information loss, so the model's final results fall short. To address these issues, we designed a medical image segmentation network named CFDformer based on multi-scale cross fusion. Specifically, we improve the attention mechanism through a multi-scale cross-fusion strategy and introduce a grouped cross-attention mechanism, named CCAM, that preserves the model's ability to capture and refine both global information and local features. We also devise a skip connection strategy, named SCCF, based on a mixed attention fusion approach. Experiments show that our method yields satisfactory results on the relevant datasets. The code will be publicly available on GitHub to facilitate reproducibility and further research.
Highlights:
• We propose a novel hybrid attention strategy for medical image segmentation.
• We construct a skip connection based on the idea of hybrid attention.
• Our network shows good performance on relevant public datasets.
ISSN: 1746-8094
DOI: 10.1016/j.bspc.2024.107208
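The abstract does not give implementation details for CCAM, but the general idea of grouped cross-attention can be illustrated. Below is a minimal, hypothetical PyTorch sketch (the class name GroupedCrossAttention and all hyperparameters are our own illustration, not the paper's code): queries come from one feature branch, keys and values from another, and the channel dimension is split into groups so each attention map operates on a smaller subspace.

```python
import torch
import torch.nn as nn

class GroupedCrossAttention(nn.Module):
    """Hypothetical sketch in the spirit of CCAM: cross-attention between
    two branches with channels split into groups."""

    def __init__(self, dim: int, groups: int = 4):
        super().__init__()
        assert dim % groups == 0, "dim must be divisible by groups"
        self.groups = groups
        self.scale = (dim // groups) ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)    # queries from branch A
        self.to_kv = nn.Linear(dim, dim * 2, bias=False)  # keys/values from branch B
        self.proj = nn.Linear(dim, dim)

    def forward(self, x_local: torch.Tensor, x_global: torch.Tensor) -> torch.Tensor:
        # x_local, x_global: (batch, tokens, dim)
        b, n, d = x_local.shape
        q = self.to_q(x_local)
        k, v = self.to_kv(x_global).chunk(2, dim=-1)
        # Reshape to (batch, groups, tokens, dim // groups) so attention is
        # computed independently within each channel group.
        q, k, v = (t.view(b, n, self.groups, -1).transpose(1, 2) for t in (q, k, v))
        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = attn.softmax(dim=-1) @ v
        out = out.transpose(1, 2).reshape(b, n, d)
        return self.proj(out)

# Usage example with arbitrary feature shapes:
block = GroupedCrossAttention(dim=64, groups=4)
out = block(torch.randn(2, 196, 64), torch.randn(2, 196, 64))  # (2, 196, 64)
```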
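Likewise, SCCF is described only as a skip connection based on mixed attention fusion. One common way to realize such a fusion is to gate the concatenated encoder and decoder features with channel attention followed by spatial attention; the sketch below assumes this interpretation, and the class name and layer choices are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class MixedAttentionSkipFusion(nn.Module):
    """Hypothetical sketch loosely in the spirit of SCCF: fuse encoder and
    decoder features via channel attention followed by spatial attention."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze-and-excitation-style gating over the
        # concatenated features.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels * 2, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels * 2, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: a 7x7 conv over mean- and max-pooled channel maps.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(channels * 2, channels, kernel_size=1)

    def forward(self, enc: torch.Tensor, dec: torch.Tensor) -> torch.Tensor:
        # enc, dec: (batch, channels, H, W) encoder / decoder features.
        x = torch.cat([enc, dec], dim=1)
        x = x * self.channel_gate(x)                      # reweight channels
        pooled = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        x = x * self.spatial_gate(pooled)                 # reweight locations
        return self.fuse(x)                               # back to `channels`

# Usage example on a 32-channel skip connection:
fusion = MixedAttentionSkipFusion(channels=32)
y = fusion(torch.randn(2, 32, 56, 56), torch.randn(2, 32, 56, 56))  # (2, 32, 56, 56)
```

Gating channels before spatial positions is one conventional ordering (as in CBAM-style blocks); the paper's actual fusion may differ.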