A multimodal breast cancer diagnosis method based on Knowledge-Augmented Deep Learning
Published in: Biomedical Signal Processing and Control, 2024-04, Vol. 90, Article 105843
Main authors: , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Breast cancer is a worldwide medical challenge that requires early diagnosis. While there are numerous diagnostic methods for breast cancer, many focus primarily on network structure and neglect the guidance of professional medical knowledge. Moreover, these methods limit their analysis to 2-dimensional B-mode ultrasound images and rarely consider the insights offered by Contrast-Enhanced Ultrasound (CEUS) videos, which provide more detailed dynamic pathological information. How to effectively utilize prior medical knowledge to achieve a precise diagnosis of breast cancer from CEUS videos has therefore emerged as a pressing issue. To address this challenge, we propose KAMnet, a multimodal breast cancer diagnostic method based on Knowledge-Augmented Deep Learning. The method integrates three types of prior knowledge into deep neural networks through different integration strategies. First, we devise a temporal segment selection strategy guided by Gaussian sampling as data-level integration, guiding the model to focus on keyframes. Second, we construct a feature fusion network for architecture-level integration and achieve collaborative inference through decision-level integration, facilitating multimodal information exchange. Finally, a spatial attention-guided loss function provides training-level integration that helps the model target lesion regions. We validate our model on our breast cancer video dataset of 332 cases; it achieves a sensitivity of 90.91% and an accuracy of 88.238%. Extensive ablation experiments demonstrate the effectiveness of our knowledge enhancement modules. The code is released at https://github.com/tobenan/BCCAD_torch.
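To make the temporal strategy above concrete, here is a minimal sketch of Gaussian-guided segment sampling, assuming a PyTorch video tensor; the function name and the center/width parameters are illustrative assumptions, not taken from the released KAMnet code.

```python
import torch

def gaussian_segment_sample(video, num_frames=8, center_ratio=0.5, std_ratio=0.15):
    """Sample num_frames frames from a clip, biased toward a key phase.

    video: tensor of shape (T, C, H, W).
    center_ratio and std_ratio are hypothetical parameters giving the
    Gaussian mean and standard deviation as fractions of clip length T.
    """
    T = video.shape[0]
    # Draw indices from N(mean, std), clamp into range, and keep temporal order.
    idx = torch.normal(center_ratio * T, std_ratio * T, size=(num_frames,))
    idx = idx.clamp(0, T - 1).round().long()
    idx, _ = torch.sort(idx)
    return video[idx]

# Usage: bias sampling toward, e.g., the contrast wash-in phase of a 100-frame clip.
clip = torch.randn(100, 3, 224, 224)
print(gaussian_segment_sample(clip, num_frames=8, center_ratio=0.4).shape)
# torch.Size([8, 3, 224, 224])
```

Compared with uniform sampling, concentrating the draw around an informative phase of the CEUS clip is one plausible way to realize the "focus on keyframes" behavior the abstract describes.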
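Similarly, the architecture- and decision-level integration is only named in the abstract, so one plausible reading is sketched below: two modality branches (B-mode and CEUS) feed a fusion head, and three classification heads vote at decision level. The class name, layer sizes, and averaging rule are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Illustrative two-branch fusion head for B-mode and CEUS features.

    Assumes each modality branch already produces a feature vector of size
    dim; the fused head and the per-branch heads vote at decision level.
    """
    def __init__(self, dim=512, num_classes=2):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.head_fused = nn.Linear(dim, num_classes)
        self.head_bmode = nn.Linear(dim, num_classes)
        self.head_ceus = nn.Linear(dim, num_classes)

    def forward(self, f_bmode, f_ceus):
        # Architecture-level integration: concatenate and fuse both modalities.
        fused = self.fuse(torch.cat([f_bmode, f_ceus], dim=1))
        # Decision-level integration: average the three branch predictions.
        logits = (self.head_fused(fused)
                  + self.head_bmode(f_bmode)
                  + self.head_ceus(f_ceus)) / 3.0
        return logits

# Usage with dummy branch features:
model = FusionClassifier()
out = model(torch.randn(4, 512), torch.randn(4, 512))
print(out.shape)  # torch.Size([4, 2])
```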
Highlights:
- Few cancer diagnostic studies explore CEUS videos, which contain rich dynamic information.
- We propose and demonstrate a specific process of Knowledge-Augmented Deep Learning.
- Temporal knowledge integration: Gaussian sampling for keyframes.
- Spatial knowledge integration: a spatial attention-guided loss function (sketched below).
- Empirical knowledge integration: a feature fusion network for joint inference.
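As a rough illustration of the spatial attention-guided loss listed in the highlights, the sketch below penalizes attention mass that falls outside an annotated lesion mask; the function, mask format, and weighting are assumed for illustration rather than drawn from the paper.

```python
import torch
import torch.nn.functional as F

def attention_guided_loss(attn_map, lesion_mask):
    """Hypothetical spatial-attention guidance term.

    attn_map:    (B, 1, h, w) attention logits from the network.
    lesion_mask: (B, 1, H, W) binary lesion annotation.
    Encourages the normalized attention to concentrate inside the mask by
    penalizing attention mass on background pixels.
    """
    # Resize the mask to the attention resolution.
    mask = F.interpolate(lesion_mask.float(), size=attn_map.shape[-2:], mode="nearest")
    # Normalize attention into a spatial probability distribution.
    B = attn_map.shape[0]
    attn = torch.softmax(attn_map.view(B, -1), dim=1).view_as(attn_map)
    # Fraction of attention mass outside the lesion region.
    outside = (attn * (1.0 - mask)).sum(dim=(1, 2, 3))
    return outside.mean()

# Usage: add to the classification loss with an assumed weight, e.g.
# total_loss = ce_loss + 0.1 * attention_guided_loss(attn, mask)
```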
ISSN: 1746-8094, 1746-8108
DOI: 10.1016/j.bspc.2023.105843