Evaluation of an automated fetal myocardial performance index
Published in: Ultrasound in Obstetrics & Gynecology, 2016-10, Vol. 48 (4), pp. 496-503
Format: Article
Language: English
Online access: Full text
Abstract:
Objective
To compare automated measurements of the fetal left myocardial performance index (MPI) with manual measurements for absolute value, repeatability and waveform acceptability.
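As context for the objective (a standard definition from the literature, not stated in this record): the left MPI, also known as the Tei index, is derived from Doppler time intervals as MPI = (ICT + IRT) / ET, where ICT is the isovolumetric contraction time, IRT the isovolumetric relaxation time and ET the ejection time, all measured within a single cardiac cycle.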
Methods
This was a multicenter international online study using images from uncomplicated, morphologically normal singleton pregnancies (16–38 weeks' gestation). Single Doppler ultrasound cardiac cycle images of 25 cases were selected, triplicated and randomized (n = 75). Six senior observers, unaware of the repetition of images, manually calculated MPI for each waveform and the results were compared with the automated measurements. Intraobserver repeatability and interobserver reproducibility were assessed using intraclass correlation coefficients (ICCs) with 95% CI. The agreement between each observer's manual MPI measurements and the corresponding automated measurements was evaluated using Bland–Altman plots and ICCs with 95% CI. The degree of variation between experts in the classification of fetal MPI waveform quality was assessed using individual cardiac cycle left MPI images previously classified by two authors as ‘optimal’, ‘suboptimal’ or ‘unacceptable’, with 30 images selected for each quality group. Ten images in each category were duplicated and the resulting 120 images were randomized and then classified online by five observers. The kappa statistic (κ) was used to assess interobserver and intraobserver agreement in the quality classifications made by the five observers.
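A minimal sketch of the agreement analyses described above, assuming a hypothetical data layout and the pingouin, scikit-learn and NumPy libraries; the abstract does not report the software actually used, and all values below are illustrative.

    # Sketch of the Methods' agreement analyses (illustrative data only).
    import numpy as np
    import pandas as pd
    import pingouin as pg
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical layout: one row per manual MPI measurement.
    df = pd.DataFrame({
        'image':    [1, 1, 2, 2, 3, 3],
        'observer': ['A', 'B', 'A', 'B', 'A', 'B'],
        'mpi':      [0.42, 0.45, 0.51, 0.49, 0.38, 0.40],
    })

    # Interobserver reproducibility: ICC with 95% CI.
    icc = pg.intraclass_corr(data=df, targets='image',
                             raters='observer', ratings='mpi')
    print(icc[['Type', 'ICC', 'CI95%']])

    # Bland–Altman agreement: manual vs automated values.
    manual = np.array([0.42, 0.51, 0.38])
    auto = np.array([0.44, 0.50, 0.39])
    diff = manual - auto
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)  # half-width of the 95% limits of agreement
    print(f"bias={bias:.3f}, LoA=({bias - loa:.3f}, {bias + loa:.3f})")

    # Kappa for waveform quality classification agreement.
    obs1 = ['optimal', 'suboptimal', 'unacceptable', 'optimal']
    obs2 = ['optimal', 'suboptimal', 'suboptimal', 'optimal']
    print('kappa =', cohen_kappa_score(obs1, obs2))

Applying cohen_kappa_score to an observer's first and repeated classifications of the duplicated images would give the intraobserver kappa in the same way.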
Results
The automated measurement software returned the same value for any given image, resulting in an ICC of 1.00. Manual measurements had intraobserver repeatability ICC values ranging from 0.69 to 0.97, and the interobserver reproducibility ICC was 0.78. Comparison of automated vs manual MPI absolute measurements for each observer gave ICCs ranging from 0.77 to 0.96. Interobserver agreement on image quality classification gave κ = 0.69 (P …
ISSN: 0960-7692, 1469-0705
DOI: 10.1002/uog.15770