Boundary-rendering network for breast lesion segmentation in ultrasound images


Detailed Description

Bibliographic Details
Published in: Medical Image Analysis, 2022-08, Vol. 80, Article 102478
Main authors: Huang, Ruobing, Lin, Mingrong, Dou, Haoran, Lin, Zehui, Ying, Qilong, Jia, Xiaohong, Xu, Wenwen, Mei, Zihan, Yang, Xin, Dong, Yijie, Zhou, Jianqiao, Ni, Dong
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract:
• A specialized segmentation model that can address blurry or occluded edges in ultrasound images.
• A differentiable boundary selection module that can automatically focus on the marginal area.
• A GCN-based boundary rendering module that can incorporate global contour information.
• A unified framework that can perform segmentation and classification simultaneously.

Breast Ultrasound (BUS) has proven to be an effective tool for the early detection of breast cancer. Lesion segmentation identifies the boundary, shape, and location of the target and serves as a crucial step toward accurate diagnosis. Despite recent efforts to develop machine learning algorithms that automate this process, problems remain due to blurry or occluded edges and highly irregular nodule shapes. Existing methods often produce over-smooth or inaccurate results, failing to identify the detailed boundary structures that are of clinical interest. To overcome these challenges, we propose a novel boundary-rendering framework that explicitly highlights the importance of the boundary for automated nodule segmentation in BUS images. It uses a boundary selection module to automatically focus on the ambiguous boundary region, and a graph convolution-based boundary rendering module to exploit global contour information. Furthermore, the proposed framework embeds nodule classification via semantic segmentation and encourages co-learning across tasks. Validation experiments were performed on different BUS datasets to verify the robustness of the proposed method. Results show that the proposed method outperforms state-of-the-art segmentation approaches (Dice=0.854, IOU=0.919, HD=17.8) in nodule delineation, and obtains higher classification accuracy than classical classification models.
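The abstract reports Dice, IoU, and Hausdorff distance (HD) for segmentation quality. As a point of reference, here is a minimal sketch of how these standard metrics are computed for binary masks and contour point sets; this is not the authors' code, and the function names are illustrative:

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice coefficient and IoU (Jaccard index) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return dice, iou

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two (N, 2) contour point sets."""
    # Pairwise Euclidean distances between every point in a and every point in b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Largest nearest-neighbour distance, taken in both directions.
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

In practice HD is usually evaluated on the boundary pixels of the predicted and ground-truth masks, often in its 95th-percentile variant (HD95) to reduce sensitivity to outlier points.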
ISSN:1361-8415
1361-8423
DOI:10.1016/j.media.2022.102478