Mixed reality infrastructure based on deep learning medical image segmentation and 3D visualization for bone tumors using DCU-Net

Bibliographic Details
Published in: Journal of Bone Oncology, 2025-02, Vol. 50, p. 100654, Article 100654
Authors: Wang, Kun, Han, Yong, Ye, Yuguang, Chen, Yusi, Zhu, Daxin, Huang, Yifeng, Huang, Ying, Chen, Yijie, Shi, Jianshe, Ding, Bijiao, Huang, Jianlong
Format: Article
Language: English
Online Access: Full text
Abstract:
•Automatic segmentation and 3D visualization of CT images by optimizing feature extraction and target space clustering.
•Dice similarity coefficient (DSC), recall (R), precision (P), and three-dimensional vertex distance error (VDE) are combined to evaluate segmentation performance and 3D reconstruction quality.
•The DCU-Net model shows significant advantages over models such as U-Net and Attention-UNet.
•A mixed-reality three-dimensional visualization infrastructure was constructed.
•The infrastructure enhances clinicians' understanding of tumor morphology and spatial relationships, which is expected to facilitate clinical practice for bone tumors and improve outcomes.

Segmenting and reconstructing 3D models of bone tumors from 2D image data is of great significance for assisting disease diagnosis and treatment. However, because tumors are difficult to distinguish from surrounding tissues in the images, existing methods lack accuracy and stability. This study proposes a U-Net model based on double dimensionality reduction and a channel attention gating mechanism, the DCU-Net model, for oncological image segmentation. After achieving automatic segmentation and 3D reconstruction of osteosarcoma by optimizing feature extraction and improving target space clustering, we built a mixed reality (MR) infrastructure and explored the application prospects of combining deep learning-based medical image segmentation with mixed reality in the diagnosis and treatment of bone tumors. We conducted experiments on a hospital bone tumor segmentation dataset, used the optimized DCU-Net and 3D reconstruction technology to generate bone tumor models, and used the Dice similarity coefficient (DSC), recall (R), precision (P), and 3D vertex distance error (VDE) to evaluate segmentation performance and 3D reconstruction quality. Two surgeons then performed clinical examinations on patients using two different methods, viewing 2D images versus the MR infrastructure, and a Likert scale (LS) was used to compare the effectiveness of the surgical plans produced by the two methods. The DSC, R, and P values of the proposed model all exceed 90%, a significant advantage over methods such as U-Net and Attention-UNet. Furthermore, the LS results showed that clinicians in the DCU-Net-based MR group had better spatial awareness during preoperative tumor planning. The deep learning DCU-Net model can improve the performance of tumor CT image segmentation […]
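The abstract names a channel attention gating mechanism as a core component of DCU-Net but, being a summary, gives no architectural details. The PyTorch module below is only an illustrative sketch of a generic squeeze-and-excitation style channel attention gate of the kind commonly inserted into U-Net variants; the class name, reduction ratio, and layer sizes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ChannelAttentionGate(nn.Module):
    """Generic channel attention gate (squeeze-and-excitation style).

    Illustrative stand-in only; not the DCU-Net block from the paper.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global spatial average per channel
        self.fc = nn.Sequential(              # excitation: bottleneck MLP producing channel weights
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                          # reweight (gate) the feature channels


# Example: gate a feature map shaped like one U-Net encoder stage.
feats = torch.randn(2, 64, 128, 128)
gated = ChannelAttentionGate(64)(feats)
print(gated.shape)  # torch.Size([2, 64, 128, 128])
```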
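The 2D evaluation metrics named in the abstract (DSC, recall, precision) have standard definitions for binary segmentation masks; the NumPy sketch below computes them, with the function name and epsilon smoothing chosen for illustration. The 3D vertex distance error (VDE) is not shown, since its exact definition is not given in this summary and would require comparing reconstructed and reference mesh vertices.

```python
import numpy as np


def segmentation_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> dict:
    """Compute DSC, recall (R) and precision (P) for binary masks (illustrative sketch).

    pred, target: boolean or {0, 1} arrays of identical shape.
    eps guards against division by zero on empty masks.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)

    tp = np.logical_and(pred, target).sum()    # true positives
    fp = np.logical_and(pred, ~target).sum()   # false positives
    fn = np.logical_and(~pred, target).sum()   # false negatives

    dsc = 2.0 * tp / (2.0 * tp + fp + fn + eps)   # Dice similarity coefficient
    recall = tp / (tp + fn + eps)                 # sensitivity
    precision = tp / (tp + fp + eps)              # positive predictive value
    return {"DSC": dsc, "R": recall, "P": precision}
```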
ISSN: 2212-1374, 2212-1366
DOI: 10.1016/j.jbo.2024.100654