Development of an algorithm for evaluating the impact of measurement variability on response categorization in oncology trials
Published in: BMC Medical Research Methodology, 2019-05, Vol. 19 (1), p. 90, Article 90
Format: Article
Language: English
Online access: Full text
Abstract: Radiologic assessments of baseline and post-treatment tumor burden are subject to measurement variability, but the impact of this variability on the objective response rate (ORR) and progression rate in specific trials has been difficult to predict in practice. In this study, we aimed to develop an algorithm for evaluating the quantitative impact of measurement variability on the ORR and progression rate.
First, we devised a hierarchical model for estimating the distribution of measurement variability from a clinical trial dataset of computed tomography scans. Next, using the estimated distribution, a simulation method was applied to calculate the probability that measurement error alters the categorical diagnosis under various scenarios. Based on these probabilities, we developed an algorithm that evaluates the reliability of an ORR or progression rate (i.e., the variation in the assessed rate) by generating the 95% central range of results that would be obtained if a reassessment were performed. Finally, we validated the approach against an external dataset. For the estimated distribution of measurement variability, the coverage level was calculated as the proportion of the 95% central ranges of hypothetical second readings that covered the actual burden sizes. For the evaluation algorithm, the coverage level was calculated over 100 resampled datasets as the proportion of the 95% central ranges of ORR results that covered the ORR from a real second assessment.
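To illustrate the core simulation step, the sketch below shows one way a 95% central range of the ORR could be generated under hypothetical re-measurement. It is a minimal sketch, not the authors' implementation: the multiplicative lognormal error model, its scale parameter error_sd, the 30% shrinkage cut-off defining response, and the function name simulate_orr_range are all illustrative assumptions.

import numpy as np

def simulate_orr_range(baseline, post, error_sd=0.1, n_sim=10_000, seed=0):
    """Return the (2.5th, 97.5th) percentiles of simulated ORR values.

    baseline, post : per-patient tumor burden (e.g., sum of lesion diameters)
    error_sd       : assumed SD of multiplicative (lognormal) measurement error
    """
    rng = np.random.default_rng(seed)
    baseline = np.asarray(baseline, dtype=float)
    post = np.asarray(post, dtype=float)
    orr_draws = np.empty(n_sim)
    for i in range(n_sim):
        # Hypothetical second readings of both scans under measurement variability.
        b = baseline * rng.lognormal(mean=0.0, sigma=error_sd, size=baseline.size)
        p = post * rng.lognormal(mean=0.0, sigma=error_sd, size=post.size)
        change = (p - b) / b
        responders = change <= -0.30  # >= 30% shrinkage counted as response (assumed cut-off)
        orr_draws[i] = responders.mean()
    return np.percentile(orr_draws, [2.5, 97.5])

# Example: central range of the ORR that a reassessment could plausibly yield.
low, high = simulate_orr_range(baseline=[52.0, 80.0, 61.0, 45.0, 90.0],
                               post=[30.0, 78.0, 40.0, 44.0, 95.0])
print(f"95% central range of ORR on reassessment: {low:.0%} to {high:.0%}")

In the paper, the measurement-error distribution is instead estimated from the CT dataset via the hierarchical model, and an analogous range is produced for the progression rate using the corresponding progression cut-off.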
We built a web tool implementing the algorithm (publicly available at http://studyanalysis2017.pythonanywhere.com/). In the validation of the estimated distribution and of the algorithm, the coverage levels were 93% and 100%, respectively.
The validation exercise using an external dataset demonstrated the adequacy of the statistical model and the utility of the developed algorithm. Quantification of the variation in the ORR and progression rate due to potential measurement variability is essential and will help inform decisions made on the basis of trial data.
ISSN: 1471-2288
DOI: 10.1186/s12874-019-0727-7