Surgical planning of pelvic tumor using multi-view CNN with relation-context representation learning
Published in: Medical Image Analysis, 2021-04, Vol. 69, Article 101954
Format: Article
Language: English
Online access: Full text
Summary:
• A novel CNN architecture exploring 2D and 3D information from multi-view MR scans.
• A regularization module that improves representation learning from limited data.
• Efficient surgical planning for pelvic tumor resection and reconstruction.
• Performance of the proposed method is comparable to inter-annotator agreement.
• A phantom study and a real surgery demonstrated the success of the proposed workflow.
Limb salvage surgery for malignant pelvic tumors is among the most challenging procedures in musculoskeletal oncology due to the complex anatomy of the pelvic bones and soft tissues. Accurately resecting the pelvic tumor with appropriate margins is crucial in this procedure. However, many hospitals still lack efficient and reproducible image-based planning methods for tumor identification and segmentation. In this paper, we present a novel deep learning-based method to accurately segment pelvic bone tumors in MRI. Our method uses a multi-view fusion network to extract pseudo-3D information from two scans acquired in different directions and improves the feature representation by learning a relational context. In this way, it can fully utilize the spatial information in thick MRI scans and reduce over-fitting when learning from a small dataset. The proposed method was evaluated on two independent datasets collected from 90 and 15 patients, respectively. Its segmentation accuracy was superior to several competing methods and comparable to expert annotation, while the average segmentation time decreased roughly 100-fold, from 1820.3 seconds to 19.2 seconds. In addition, we incorporated our method into an efficient workflow to improve the surgical planning process. This workflow took only 15 minutes to complete surgical planning in a phantom study, a dramatic acceleration compared with the 2-day span of a traditional workflow.
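To make the multi-view fusion idea concrete, the sketch below shows one plausible way to combine slices from two differently oriented MR scans in a 2D segmentation network. This is a minimal illustration in PyTorch, not the authors' released code: the two-branch encoder, the concatenation-based fusion, the layer widths, and the names MultiViewFusionNet and conv_block are assumptions for illustration, and the paper's relation-context regularization module is omitted entirely.

```python
# Minimal sketch (assumed, not the authors' implementation) of multi-view
# fusion: two 2D encoders process corresponding slices from two MR scans
# acquired in different directions, their features are concatenated, and a
# decoder predicts the segmentation map.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class MultiViewFusionNet(nn.Module):
    """Encodes two views separately, fuses their features, decodes a mask."""

    def __init__(self, in_ch=1, base=32, num_classes=2):
        super().__init__()
        # One lightweight encoder per view (weight sharing between views
        # would be an equally plausible design choice).
        self.enc_view1 = conv_block(in_ch, base)
        self.enc_view2 = conv_block(in_ch, base)
        self.pool = nn.MaxPool2d(2)
        self.enc_deep1 = conv_block(base, base * 2)
        self.enc_deep2 = conv_block(base, base * 2)
        # Fusion by channel-wise concatenation of the two view features.
        self.fuse = conv_block(base * 4, base * 2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, num_classes, kernel_size=1)

    def forward(self, view1, view2):
        # view1 / view2: slices resampled to a common grid, shape (B, 1, H, W).
        f1 = self.enc_deep1(self.pool(self.enc_view1(view1)))
        f2 = self.enc_deep2(self.pool(self.enc_view2(view2)))
        fused = self.fuse(torch.cat([f1, f2], dim=1))
        return self.head(self.dec(self.up(fused)))


if __name__ == "__main__":
    net = MultiViewFusionNet()
    a = torch.randn(1, 1, 256, 256)  # e.g. a slice from the first scan direction
    b = torch.randn(1, 1, 256, 256)  # the corresponding resampled second view
    print(net(a, b).shape)           # torch.Size([1, 2, 256, 256])
```

The key design point the abstract highlights is that each view is encoded in 2D (cheap and robust for thick-slice MRI) while the fusion of the two orientations recovers pseudo-3D context; how exactly the views are registered and fused, and how the relational context is learned, is specific to the paper and not reproduced here.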
ISSN: 1361-8415; 1361-8423
DOI: 10.1016/j.media.2020.101954