Interplatform Reproducibility of CT Coronary Calcium Scoring Software
Published in: Radiology 2012-10, Vol. 265 (1), p. 70-77
Main authors:
Format: Article
Language: English
Online access: Full text
Abstract:
To investigate whether coronary artery calcium (CAC) scoring performed on three different workstations generates comparable and thus vendor-independent results.
Institutional review board and Federal Office for Radiation Protection approval were received, as was each patient's written informed consent. Fifty-nine patients (37 men, 22 women; mean age, 57 years ± 3 [standard deviation]) underwent CAC scoring with 64-section multidetector computed tomography (CT) with retrospective electrocardiographic gating (one examination per patient). Data sets were reconstructed at 10% increments of the R-R interval, from 40% to 80%. Two experienced observers in consensus calculated Agatston and volume scores for all data sets by using the calcium scoring software of three different workstations. Comparative analysis of CAC scores between the workstations was performed by using regression analysis, Spearman rank correlation (rs), and the Kruskal-Wallis test.
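As a rough illustration of this type of comparison (not the study's actual analysis code), the sketch below applies SciPy's spearmanr and kruskal functions to hypothetical paired Agatston scores from three workstations; the score values and workstation labels are invented for the example.

```python
# Minimal sketch of an inter-workstation comparison, assuming hypothetical
# per-patient Agatston scores measured on three different platforms.
import numpy as np
from scipy.stats import spearmanr, kruskal

# Hypothetical scores for the same five patients on three workstations.
scores_a = np.array([0.0, 12.4, 148.0, 402.5, 35.1])
scores_b = np.array([0.0, 13.0, 151.2, 398.7, 34.6])
scores_c = np.array([0.0, 11.8, 146.5, 405.0, 36.0])

# Pairwise Spearman rank correlation between workstations.
for label, other in (("A vs B", scores_b), ("A vs C", scores_c)):
    rs, p = spearmanr(scores_a, other)
    print(f"{label}: rs = {rs:.3f} (p = {p:.3g})")

# Kruskal-Wallis test for a systematic difference across the three platforms.
h_stat, p_kw = kruskal(scores_a, scores_b, scores_c)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3f}")
```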
Each workstation produced different absolute numeric results for Agatston and volume scores. However, statistical analysis revealed excellent correlation between the workstations, with the highest correlation at 60% of the R-R interval (minimal rs = 0.998; maximal rs = 0.999) for both scoring methods. No significant differences in Agatston and volume score results were detected between the software platforms. When individual reconstruction intervals were analyzed, each workstation showed the same score variability; as a result, 12 of 59 patients were assigned to divergent cardiac risk groups by at least one of the workstations.
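To make the risk-group effect concrete, the sketch below assumes the commonly used Agatston categories (0, 1 to 10, 11 to 100, 101 to 400, above 400); the study's exact cut points are not stated in the abstract. It shows how small score differences near a threshold, whether from the reconstruction interval or the platform, can move a patient into a different risk group.

```python
# Hypothetical mapping of an Agatston score to a risk category, assuming
# commonly cited cut points (not necessarily those used in the study).
def agatston_risk_group(score: float) -> str:
    if score == 0:
        return "no identifiable disease"
    if score <= 10:
        return "minimal"
    if score <= 100:
        return "mild"
    if score <= 400:
        return "moderate"
    return "severe"

# Two reconstructions of the same patient straddling the 100 cut point.
print(agatston_risk_group(98.6))   # mild
print(agatston_risk_group(103.2))  # moderate
```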
Although the absolute numeric values differ, commercially available software platforms produce comparable CAC scoring results, which suggests vendor independence of the method; however, none of the analyzed software platforms appears to provide a distinct advantage for risk stratification, because the dependence of CAC scores on the reconstruction interval persists across platforms.
ISSN: 0033-8419, 1527-1315
DOI: 10.1148/radiol.12112532