BGRD-TransUNet: A Novel TransUNet-Based Model for Ultrasound Breast Lesion Segmentation


Detailed Description

Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, pp. 31182-31196
Main Authors: Ji, Zhanlin; Sun, Haoran; Yuan, Na; Zhang, Haiyang; Sheng, Jiaxi; Zhang, Xueji; Ganchev, Ivan
Format: Article
Language: English
Subjects:
Online Access: Full text
Description
Summary: Breast UltraSound (BUS) imaging is a commonly used diagnostic tool in the fight against breast diseases, especially for the early detection and diagnosis of breast cancer. Due to inherent characteristics of ultrasound images, such as blurry boundaries and diverse tumor morphologies, manual segmentation of breast tumors is challenging for doctors. In recent years, Convolutional Neural Network (CNN) technology has been widely applied to automatically segment BUS images. However, CNNs are inherently limited in capturing global contextual information. To address this issue, the paper proposes a novel BGRD-TransUNet model for breast lesion segmentation, based on TransUNet. First, the proposed model replaces the original ResNet50 backbone network of TransUNet with DenseNet121 for initial feature extraction. Next, newly designed Residual Multi-Scale Feature Modules (RMSFMs) extract features from various layers of DenseNet121, capturing richer features within specific layers. Thirdly, a Boundary Guidance (BG) network is added to enhance the contour information of BUS images. Additionally, newly designed Boundary Attentional Feature Fusion Modules (BAFFMs) integrate the edge information with the features extracted by the RMSFMs. Finally, newly designed Parallel Channel and Spatial Attention Modules (PCSAMs) refine feature extraction using channel and spatial attention. Extensive experimental testing on two public datasets demonstrates that the proposed BGRD-TransUNet model outperforms all state-of-the-art medical image segmentation models participating in the experiments on nearly all evaluation metrics used (with only a few separate exceptions), including the two most important and widely used metrics in the field of medical image segmentation, namely the Intersection over Union (IoU) and the Dice Similarity Coefficient (DSC).
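The abstract does not disclose the internals of the PCSAM. As a rough illustration of the general idea of applying channel attention and spatial attention in parallel (the function name, pooling choices, and the sum-based fusion rule below are all assumptions for illustration, not the authors' design), a minimal NumPy sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def parallel_channel_spatial_attention(x):
    """Toy parallel channel + spatial attention on a (C, H, W) feature map.

    Channel branch: global average pooling -> one sigmoid gate per channel.
    Spatial branch: channel-wise mean map -> one sigmoid gate per pixel.
    The two reweighted maps are summed (an assumed fusion rule).
    """
    # Channel attention: gate of shape (C,), broadcast over H and W
    chan_gate = sigmoid(x.mean(axis=(1, 2)))
    chan_out = x * chan_gate[:, None, None]
    # Spatial attention: gate of shape (H, W), broadcast over channels
    spat_gate = sigmoid(x.mean(axis=0))
    spat_out = x * spat_gate[None, :, :]
    # Parallel fusion (assumption): element-wise sum of the two branches
    return chan_out + spat_out

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))   # hypothetical feature map
refined = parallel_channel_spatial_attention(feat)
print(refined.shape)  # (4, 8, 8) -- attention preserves the feature-map shape
```

In practice such gates would be produced by small learned layers (e.g. an MLP for the channel branch and a convolution for the spatial branch); the fixed pooling here only sketches the data flow.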
More specifically, on the BUSI dataset and Dataset B, BGRD-TransUNet achieves IoU values of 76.77% and 86.61%, and DSC values of 85.08% and 92.47%, respectively, exceeding the corresponding values of the TransUNet baseline by 7.27 and 3.64 percentage points (IoU) and by 5.81 and 2.54 percentage points (DSC).
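The two reported metrics have simple set-based definitions over binary masks: IoU is the overlap divided by the union, and DSC is twice the overlap divided by the total mask area (so DSC = 2·IoU / (1 + IoU)). A self-contained NumPy sketch with made-up 3×3 masks:

```python
import numpy as np

def iou_and_dice(pred, target):
    """Intersection over Union and Dice Similarity Coefficient for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / union
    dice = 2 * inter / (pred.sum() + target.sum())
    return iou, dice

# Toy example: 3 predicted pixels, 3 ground-truth pixels, 2 overlapping
pred = np.array([[1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
gt = np.array([[1, 0, 0],
               [0, 1, 1],
               [0, 0, 0]])
iou, dice = iou_and_dice(pred, gt)
print(iou, dice)  # intersection=2, union=4 -> IoU 0.5; Dice 4/6 ≈ 0.6667
```

Note that Dice always weighs the overlap more generously than IoU, which is why the paper's DSC figures (85.08%, 92.47%) sit above the corresponding IoU figures (76.77%, 86.61%).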
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3368170