Multi-region radiomics for artificially intelligent diagnosis of breast cancer using multimodal ultrasound


Full Description

Bibliographic Details
Published in: Computers in Biology and Medicine, 2022-10, Vol. 149, Article 105920
Main Authors: Xu, Zhou; Wang, Yuqun; Chen, Man; Zhang, Qi
Format: Article
Language: English
Online Access: Full text
Description
Summary: The ultrasound (US) diagnosis of breast cancer is usually based on a single region of the whole breast tumor from a single ultrasonic modality, which limits diagnostic performance. Multiple regions on multimodal US images of breast tumors may all carry useful diagnostic information. This study aimed to propose a multi-region radiomics approach with multimodal US for artificially intelligent diagnosis of malignant and benign breast tumors. First, radiomics features were extracted from five regions of interest (ROIs) on B-mode US and contrast-enhanced ultrasound (CEUS) images, including intensity statistics, gray-level co-occurrence matrix texture features and binary texture features. The ROIs included the whole tumor region, the strongest perfusion region, the marginal region and the surrounding region. Second, a deep neural network composed of the point-wise gated Boltzmann machine and the restricted Boltzmann machine was adopted to comprehensively learn and select features. Third, a support vector machine was used to classify tumors as benign or malignant. Finally, five single-region classification models were generated from the five ROIs and fused into an integrated classification model. Experimental evaluation was conducted on multimodal breast US images from 187 patients with breast tumors (68 malignant and 119 benign). Under five-fold cross-validation, the classification accuracy, sensitivity, specificity, Youden's index and area under the receiver operating characteristic curve (AUC) with our model were 87.1% ± 3.3%, 77.4% ± 11.8%, 92.4% ± 7.2%, 69.8% ± 8.6% and 0.849 ± 0.043, respectively. Our model was significantly better than single-region single-modal methods in terms of the AUC and accuracy (p
ISSN:0010-4825
1879-0534
DOI:10.1016/j.compbiomed.2022.105920