The ISMRM Open Science Initiative for Perfusion Imaging (OSIPI): Results from the OSIPI-Dynamic Contrast-Enhanced challenge
Saved in:
Published in: Magnetic resonance in medicine 2024-05, Vol. 91 (5), p. 1803-1821
Main authors: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,
Format: Article
Language: eng
Keywords:
Online access: Full text
Abstract: Ktrans has often been proposed as a quantitative imaging biomarker for diagnosis, prognosis, and treatment response assessment for various tumors. None of the many software tools for Ktrans quantification are standardized. The ISMRM Open Science Initiative for Perfusion Imaging-Dynamic Contrast-Enhanced (OSIPI-DCE) challenge was designed to benchmark methods to better help the efforts to standardize Ktrans measurement.
A framework was created to evaluate Ktrans values produced by DCE-MRI analysis pipelines, to enable benchmarking. The perfusion MRI community was invited to apply their pipelines for Ktrans quantification in glioblastoma from clinical and synthetic patients. Submissions were required to include the entrants' Ktrans values, the applied software, and a standard operating procedure. These were evaluated using the proposed score, defined with accuracy, repeatability, and reproducibility components.
Across the 10 received submissions, the score ranged from 28% to 78%, with a median of 59%. The accuracy, repeatability, and reproducibility scores ranged from 0.54 to 0.92, 0.64 to 0.86, and 0.65 to 1.00, respectively (0-1 = lowest-highest). Manual arterial input function selection markedly affected reproducibility and showed greater variability in Ktrans analysis than automated methods. Furthermore, provision of a detailed standard operating procedure was critical for higher reproducibility.
This study reports results from the OSIPI-DCE challenge and highlights the high inter-software variability in Ktrans estimation, providing a framework for ongoing benchmarking against the scores presented. Through this challenge, the participating teams were ranked based on the performance of their software tools in the particular setting of this challenge. In a real-world clinical setting, many of these tools may perform differently with different benchmarking methodology.
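The abstract describes a score built from accuracy, repeatability, and reproducibility components, each reported on a 0-1 scale, alongside an overall percentage. The exact weighting is defined in the paper; as a purely illustrative sketch, an equal-weight combination of three such components could look like the following (the function name and equal weights are assumptions, not the challenge's actual formula):

```python
def composite_score(accuracy: float, repeatability: float,
                    reproducibility: float) -> float:
    """Illustrative equal-weight mean of three quality components.

    Each component is expected on a 0-1 scale (lowest-highest), matching
    the ranges reported in the abstract; the result is a percentage.
    NOTE: the real OSIPI-DCE score weighting is defined in the paper;
    this equal-weight average is a hypothetical stand-in.
    """
    for name, value in (("accuracy", accuracy),
                        ("repeatability", repeatability),
                        ("reproducibility", reproducibility)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must lie in [0, 1], got {value}")
    return 100.0 * (accuracy + repeatability + reproducibility) / 3.0
```

For example, a submission at the low end of each reported component range (0.54, 0.64, 0.65) would average to 61% under this hypothetical weighting.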
ISSN: 0740-3194, 1522-2594
DOI: 10.1002/mrm.29909