Breast tumor segmentation in ultrasound images using contextual-information-aware deep adversarial learning framework


Detailed Description

Bibliographic Details
Published in: Expert Systems with Applications, 2020-12, Vol. 162, p. 113870, Article 113870
Authors: Singh, Vivek Kumar, Abdel-Nasser, Mohamed, Akram, Farhan, Rashwan, Hatem A., Sarker, Md. Mostafa Kamal, Pandey, Nidhi, Romani, Santiago, Puig, Domenec
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Automatic tumor segmentation in breast ultrasound (BUS) images remains a challenging task because of many sources of uncertainty, such as speckle noise, very low signal-to-noise ratio, and shadows that make the anatomical boundaries of tumors ambiguous, as well as highly variable tumor sizes and shapes. This article proposes an efficient automated method for tumor segmentation in BUS images based on a contextual-information-aware conditional generative adversarial learning framework. Specifically, we exploit several enhancements of a deep adversarial learning framework to capture both texture features and contextual dependencies in BUS images, which helps overcome the challenges mentioned above. First, we adopt atrous convolution (AC) to capture spatial and scale context (i.e., position and size of tumors) and thus handle widely varying tumor sizes and shapes. Second, we propose a channel attention along with channel weighting (CAW) mechanism to promote tumor-relevant features (without extra supervision) and mitigate the effects of artifacts. Third, we integrate the structural similarity index metric (SSIM) and the L1-norm into the loss function of the adversarial learning framework to capture local context information derived from the area surrounding the tumors. We used two BUS image datasets to assess the efficiency of the proposed model. The experimental results show that the proposed model achieves competitive results compared with state-of-the-art segmentation models in terms of the Dice and IoU metrics. The source code of the proposed model is publicly available at https://github.com/vivek231/Breast-US-project.
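The SSIM-plus-L1 loss term described in the abstract can be illustrated with a minimal, pure-Python sketch. This is not the paper's implementation: it computes a single-window (global) SSIM over flat pixel lists rather than the usual windowed SSIM over images, and the blending weights `alpha` and `beta` are hypothetical placeholders.

```python
def _mean(values):
    """Arithmetic mean of a list of floats."""
    return sum(values) / len(values)

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two equal-length pixel lists in [0, 1].

    c1 and c2 are the usual small stabilizing constants; real
    implementations compute SSIM over sliding windows instead.
    """
    mx, my = _mean(x), _mean(y)
    vx = _mean([(a - mx) ** 2 for a in x])
    vy = _mean([(b - my) ** 2 for b in y])
    cov = _mean([(a - mx) * (b - my) for a, b in zip(x, y)])
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx * mx + my * my + c1) * (vx + vy + c2)
    return num / den

def segmentation_loss(pred, target, alpha=0.5, beta=0.5):
    """Hypothetical blend: alpha * (1 - SSIM) + beta * mean L1 distance."""
    l1 = _mean([abs(a - b) for a, b in zip(pred, target)])
    return alpha * (1.0 - ssim_global(pred, target)) + beta * l1
```

For identical prediction and target the loss is zero (SSIM is 1 and the L1 term vanishes), and it grows as the prediction drifts from the target, which is the behavior the combined loss is meant to provide.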
• Efficient automated method for tumor segmentation in breast ultrasound (BUS) images.
• Contextual-information-aware conditional generative adversarial learning framework.
• Captures spatial and scale context to handle very different tumor sizes and shapes.
• Two BUS image datasets are used to assess the efficiency of the proposed model.
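To make the "spatial and scale context" point concrete, here is a minimal 1-D sketch of atrous (dilated) convolution, the building block the paper adopts. The helper `dilated_conv1d` is hypothetical pure Python for illustration only; the actual model would use 2-D dilated convolutions from a deep learning library.

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """Valid 1-D convolution (cross-correlation form) with a dilation factor.

    Dilation inserts (dilation - 1) gaps between kernel taps, so the
    effective receptive field is (len(kernel) - 1) * dilation + 1 while
    the number of kernel parameters stays the same.
    """
    span = (len(kernel) - 1) * dilation + 1
    return [
        sum(signal[i + j * dilation] * kernel[j] for j in range(len(kernel)))
        for i in range(len(signal) - span + 1)
    ]
```

With `dilation=1` this is an ordinary convolution; with `dilation=2` the same three-tap kernel covers a five-sample span, which is how atrous convolution captures larger-scale context (bigger tumors) without adding parameters.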
ISSN: 0957-4174
1873-6793
DOI: 10.1016/j.eswa.2020.113870