Reproducible Evaluation of Open-Source Tools for Prostate Segmentation on Public Datasets
Format: Dataset
Language: English
Abstract: Segmentation of the prostate and surrounding regions is important for a variety of clinical and research applications. Our goal is to evaluate the generalizability of publicly available state-of-the-art AI models on publicly available datasets. To compare the AI-generated segmentations to the available manually annotated ground truth, quantitative measures such as the Dice coefficient and Hausdorff distance, along with shape radiomics features, were analyzed. Our study also aims to show how cloud-based tools can be used to analyze, store, and visualize evaluation results.
Three open-source pre-trained AI prostate segmentation tools were evaluated against expert annotations on three publicly available prostate MRI collections hosted in NCI Imaging Data Commons[1]: ProstateX[5], QIN-Prostate-Repeatability[6], and PROSTATE-MRI-US-Biopsy[7]. Two of the pre-trained models originate from the nnU-Net framework[2]; the third originates from the Prostate158 paper[4]. Expert annotations of the whole prostate gland, peripheral zone (PZ), and transition zone (TZ) are available for the ProstateX collection; of the whole gland and PZ for the QIN-Prostate-Repeatability collection; and of the whole gland only for the PROSTATE-MRI-US-Biopsy collection.
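As an illustration of the comparison, the sketch below computes the Dice coefficient and Hausdorff distance with SimpleITK. It assumes the AI and expert segmentations have already been converted from DICOM Segmentation objects to binary label volumes; the file names are hypothetical.

```python
import SimpleITK as sitk

# Hypothetical file names: binary label maps (0 = background, 1 = whole gland),
# previously converted from DICOM SEG (e.g. with dcmqi) and resampled to a common grid.
ai_mask = sitk.ReadImage("ai_whole_gland.nii.gz", sitk.sitkUInt8)
expert_mask = sitk.ReadImage("expert_whole_gland.nii.gz", sitk.sitkUInt8)

# Overlap measures (Dice coefficient) between AI and expert masks.
overlap = sitk.LabelOverlapMeasuresImageFilter()
overlap.Execute(expert_mask, ai_mask)
dice = overlap.GetDiceCoefficient()

# Hausdorff distance in physical units (mm), derived from the image spacing.
hausdorff = sitk.HausdorffDistanceImageFilter()
hausdorff.Execute(expert_mask, ai_mask)
hd_mm = hausdorff.GetHausdorffDistance()

print(f"Dice: {dice:.3f}, Hausdorff: {hd_mm:.2f} mm")
```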
We rely on the DICOM standard to encode our segmentation and radiomics results. The DICOM standard aims to achieve interoperability and supports the FAIR[10] principles. Encoding our results in DICOM allows us to leverage DICOM-reliant tools, such as Google Cloud tools for storage, computation, analysis, and visualization. Open-source DICOM-based visualization tools such as the OHIF viewer[8] can also be used for qualitative review of the AI and expert annotations alongside the referenced images. DICOM Segmentation objects are used to encode the AI models' predictions, created with dcmqi[11]; DICOM Structured Reports are used to encode radiomics features[3] extracted from the AI and expert annotations, created with dcmqi and highdicom[12].
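For illustration, here is a minimal sketch (not the exact pipeline used for this dataset) of how a binary AI prediction could be encoded as a DICOM Segmentation object with highdicom; the file names, model name, and coded concepts are assumptions and should be checked against the actual objects, and dcmqi offers an equivalent command-line route.

```python
from pathlib import Path

import numpy as np
import pydicom
import highdicom as hd
from pydicom.sr.codedict import codes

# Hypothetical inputs: the source T2-weighted MR series (slice order must match
# the frames of the mask) and a binary AI prediction aligned with it.
source_images = [pydicom.dcmread(p) for p in sorted(Path("mr_series").glob("*.dcm"))]
mask = np.load("ai_whole_gland_mask.npy").astype(bool)  # shape: (slices, rows, cols)

# Describe the single "whole prostate gland" segment; the coded concepts below are
# commonly used triplets for the prostate, given here as an assumption.
segment = hd.seg.SegmentDescription(
    segment_number=1,
    segment_label="Whole prostate gland",
    segmented_property_category=hd.sr.CodedConcept(
        "123037004", "SCT", "Anatomical Structure"
    ),
    segmented_property_type=hd.sr.CodedConcept("41216001", "SCT", "Prostate"),
    algorithm_type=hd.seg.SegmentAlgorithmTypeValues.AUTOMATIC,
    algorithm_identification=hd.AlgorithmIdentificationSequence(
        name="example-prostate-model",  # hypothetical model name
        version="1.0",
        family=codes.cid7162.ArtificialIntelligence,
    ),
)

# Assemble and save the DICOM Segmentation object.
seg = hd.seg.Segmentation(
    source_images=source_images,
    pixel_array=mask,
    segmentation_type=hd.seg.SegmentationTypeValues.BINARY,
    segment_descriptions=[segment],
    series_instance_uid=hd.UID(),
    series_number=100,
    sop_instance_uid=hd.UID(),
    instance_number=1,
    manufacturer="Example Lab",  # hypothetical manufacturer metadata
    manufacturer_model_name="example-prostate-model",
    software_versions="1.0",
    device_serial_number="0000",
)
seg.save_as("ai_prostate_seg.dcm")
```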
This dataset is organized in three parts:
AI_SEGMENTATIONS_DICOM.zip, AI_STRUCTURED_REPORTS_DICOM.zip, and EXPERT_STRUCTURED_REPORTS_DICOM.zip. All zip files contain DICOM objects only, sorted by DICOM attributes following this pattern:
PatientID/
└───Modality-%StudyInstanceUID/
    └───%SeriesInstanceUID-%SeriesDescription.dcm
AI_SEGMENTATIONS_DICOM.zip contains the segmentation results of all evaluated pre-trained AI models, encoded as DICOM Segmentation objects. AI_STRUCTURED_REPORTS_DICOM.zip and EXPERT_STRUCTURED_REPORTS_DICOM.zip contain the DICOM Structured Reports with the radiomics features extracted from the AI segmentations and from the expert annotations, respectively.
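Once an archive is extracted, its contents can be indexed with pydicom following the pattern above; the directory name below is a placeholder for an unpacked zip.

```python
from pathlib import Path

import pydicom

# Placeholder directory, e.g. AI_SEGMENTATIONS_DICOM.zip unpacked here.
root = Path("AI_SEGMENTATIONS_DICOM")

for dcm_path in sorted(root.rglob("*.dcm")):
    # Read the header only; pixel data is not needed to index the objects.
    ds = pydicom.dcmread(dcm_path, stop_before_pixels=True)
    print(ds.PatientID, ds.Modality, ds.get("SeriesDescription", ""), dcm_path.name)
```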
DOI: 10.5281/zenodo.11620860