Multimodal Learning for Non-small Cell Lung Cancer Prognosis
Saved in:
Main authors: | , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | This paper focuses on the task of survival time analysis for lung
cancer. Although much progress has been made on this problem in recent years,
the performance of existing methods is still far from satisfactory. Traditional
and some deep learning-based survival time analyses for lung cancer are mostly
based on textual clinical information such as staging, age, and histology.
Unlike existing methods that predict from a single modality, we observe that a
human clinician usually draws on multimodal data, such as textual clinical data
and visual scans, to estimate survival time. Motivated by this, in this work we
contribute a cross-modality survival analysis network named Lite-ProSENet that
simulates a clinician's manner of decision making. Extensive experiments were
conducted using data from 422 NSCLC patients from The Cancer Imaging Archive
(TCIA). The results show that our Lite-ProSENet compares favorably against all
competing methods and achieves a new state of the art with a concordance index
of 89.3%. The code will be made publicly available. |
DOI: | 10.48550/arxiv.2211.03280 |
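The headline result above is reported as a concordance index (C-index), the standard evaluation metric for survival models: the fraction of comparable patient pairs whose predicted risks are ordered consistently with their observed survival times. As a rough illustration of what that 89.3% measures, here is a minimal sketch of the standard C-index computation; this is not the authors' evaluation code, and the toy data below is invented for demonstration:

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Fraction of comparable patient pairs whose predicted risks agree
    with their observed ordering of survival times. A pair is comparable
    only when the patient with the shorter time experienced the event
    (i.e., was not censored)."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        # Orient the pair so patient i has the shorter observed time.
        if times[j] < times[i]:
            i, j = j, i
        # Skip ties in time and pairs where the earlier time is censored.
        if times[i] == times[j] or not events[i]:
            continue
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1.0   # higher predicted risk died earlier
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5   # tied predictions count as half
    return concordant / comparable

# Toy cohort: survival time in months, event=1 means death observed
# (event=0 means censored), and a model's predicted risk score.
times = [5, 10, 15, 20]
events = [1, 1, 0, 1]
risk = [0.9, 0.7, 0.4, 0.2]
print(concordance_index(times, events, risk))  # risks perfectly ordered -> 1.0
```

A C-index of 1.0 means perfect ranking, 0.5 is no better than chance, so the reported 89.3% indicates the model orders patient risk correctly for the large majority of comparable pairs.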