Domain Knowledge Powered Deep Learning for Breast Cancer Diagnosis Based on Contrast-Enhanced Ultrasound Videos

Bibliographic Details
Published in: IEEE Transactions on Medical Imaging, 2021-09, Vol. 40 (9), pp. 2439-2451
Authors: Chen, Chen; Wang, Yong; Niu, Jianwei; Liu, Xuefeng; Li, Qingfeng; Gong, Xuantong
Format: Article
Language: English
Abstract: In recent years, deep learning has been widely used in breast cancer diagnosis, and many high-performance models have emerged. However, most existing deep learning models are based on static breast ultrasound (US) images. In the actual diagnostic process, contrast-enhanced ultrasound (CEUS) is a technique commonly used by radiologists. Compared with static breast US images, CEUS videos provide more detailed information about the blood supply of tumors and can therefore help radiologists make a more accurate diagnosis. In this paper, we propose a novel diagnosis model based on CEUS videos. The backbone of the model is a 3D convolutional neural network. More specifically, we notice that radiologists generally follow two specific patterns when browsing CEUS videos: they focus on specific time slots, and they pay attention to the differences between the CEUS frames and the corresponding US images. To incorporate these two patterns into our deep learning model, we design a domain-knowledge-guided temporal attention module and a channel attention module. We validate our model on our Breast-CEUS dataset composed of 221 cases. The results show that our model achieves a sensitivity of 97.2% and an accuracy of 86.3%. In particular, the incorporation of domain knowledge leads to a 3.5% improvement in sensitivity and a 6.0% improvement in specificity. Finally, we also demonstrate the validity of the two domain knowledge modules in the 3D convolutional neural network (C3D) and the 3D ResNet (R3D).
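The abstract names two attention mechanisms but the record contains no code. Below is a minimal PyTorch sketch, not the authors' implementation, of how a temporal attention gate (re-weighting frames so specific time slots dominate) and a channel attention gate (a squeeze-and-excitation-style re-weighting, one plausible way to emphasize features capturing CEUS-vs-US differences) might attach to 3D CNN features. All class names, tensor shapes, and the reduction factor are illustrative assumptions.

```python
# Illustrative sketch only; not the paper's code. Assumes features from a 3D
# CNN backbone (e.g., C3D or R3D) with layout (batch, channels, time, H, W).
import torch
import torch.nn as nn


class TemporalAttention(nn.Module):
    """Re-weights frames so the network can focus on specific time slots,
    mimicking how radiologists attend to particular phases of a CEUS video."""

    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // 2),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 2, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, H, W) -> one descriptor per frame via spatial pooling
        desc = x.mean(dim=(3, 4)).transpose(1, 2)       # (B, T, C)
        weights = torch.softmax(self.fc(desc), dim=1)   # (B, T, 1), sums to 1 over time
        weights = weights.transpose(1, 2).unsqueeze(-1).unsqueeze(-1)  # (B, 1, T, 1, 1)
        return x * weights


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel gate; one plausible way to let
    the model emphasize channels encoding CEUS-frame vs. US-image contrast."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, H, W) -> global per-channel descriptor, then a 0..1 gate
        gate = self.fc(x.mean(dim=(2, 3, 4)))           # (B, C)
        return x * gate.view(x.size(0), -1, 1, 1, 1)


if __name__ == "__main__":
    # Hypothetical usage: gate the output of one 3D conv stage.
    feats = torch.randn(2, 64, 16, 28, 28)              # (B, C, T, H, W)
    feats = ChannelAttention(64)(TemporalAttention(64)(feats))
    print(feats.shape)                                  # torch.Size([2, 64, 16, 28, 28])
```

Both gates preserve the feature shape, so under these assumptions they can be dropped between stages of either backbone the paper evaluates (C3D or R3D) without changing the rest of the network.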
ISSN: 0278-0062
EISSN: 1558-254X
DOI: 10.1109/TMI.2021.3078370