Multi-Task Multi-Scale Learning For Outcome Prediction in 3D PET Images
Saved in:
Main Authors: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | Background and Objectives: Predicting patient response to treatment
and survival in oncology is a prominent path toward precision medicine. To that
end, radiomics was proposed as a field of study in which images are used instead
of invasive methods. The first step in radiomic analysis is the segmentation of
the lesion. However, this task is time consuming and subject to inter-physician
variability. Automated tools based on supervised deep learning have made great
progress in assisting physicians. However, they are data hungry, and annotated
data remains a major issue in the medical field, where only a small subset of
images is annotated. Methods: In this work, we propose a multi-task learning
framework to predict patient survival and treatment response. We show that the
encoder can leverage multiple tasks to extract meaningful and powerful features
that improve radiomics performance. We also show that subsidiary tasks serve as
an inductive bias, allowing the model to generalize better. Results: Our model
was tested and validated for treatment response and survival in lung and
esophageal cancers, with areas under the ROC curve of 77% and 71%
respectively, outperforming single-task learning methods. Conclusions: We show
that a multi-task learning approach can boost the performance of radiomic
analysis by extracting rich information from intratumoral and peritumoral
regions. |
DOI: | 10.48550/arxiv.2203.00641 |
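The abstract describes a shared encoder that feeds several task heads, with subsidiary tasks acting as an inductive bias on the learned features. As a rough illustration only (not the authors' implementation; all layer sizes, head names, and loss weights below are assumptions), a multi-task objective with a shared encoder can be sketched in NumPy as:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Shared feature extractor (a single linear layer + ReLU for illustration)."""
    return np.maximum(x @ W, 0.0)

def head(z, V):
    """Per-task linear head producing one scalar logit per patient."""
    return z @ V

def bce(logit, y):
    """Binary cross-entropy on raw logits, averaged over patients."""
    p = 1.0 / (1.0 + np.exp(-logit))
    return -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)).mean()

# Toy stand-ins for PET-derived inputs and the two outcome labels.
X = rng.normal(size=(8, 16))           # 8 patients, 16 input features
y_response = rng.integers(0, 2, 8)     # treatment response labels (0/1)
y_survival = rng.integers(0, 2, 8)     # survival labels (0/1)

W = rng.normal(size=(16, 4)) * 0.1     # shared encoder weights
V_resp = rng.normal(size=4) * 0.1      # response head weights
V_surv = rng.normal(size=4) * 0.1      # survival head weights

Z = encoder(X, W)
# Multi-task objective: a weighted sum of per-task losses. Because both heads
# share the encoder, each task regularizes the features learned for the other.
loss = 0.5 * bce(head(Z, V_resp), y_response) + 0.5 * bce(head(Z, V_surv), y_survival)
print(float(loss))
```

In a full model the shared encoder would be a 3D convolutional network over the PET volume and the loss weights would be tuned per task; the point of the sketch is only that one set of encoder parameters receives gradients from every task head.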