Intra- and Interobserver Agreement on Radiographic Phenotype in the Diagnosis of Canine Hip Dysplasia

Bibliographic Details
Published in: Veterinary Surgery 2015-05, Vol. 44 (4), p. 467-473
Main authors: Fortrie, Ruth R., Verhoeven, Geert, Broeckx, Bart, Duchateau, Luc, Janssens, Luc, Samoy, Yves, Schreurs, Elke, Saunders, Jimmy, van Bree, Henri, Vandekerckhove, Peter, Coopman, Frank
Format: Article
Language: English
Online access: Full text
Abstract
Objective: To investigate the repeatability and reproducibility of scoring the presence of a circumferential femoral head osteophyte (CFHO), a curvilinear caudolateral osteophyte (CCO), osteosclerosis of the cranial acetabular edge (Scler CrAE), and degenerative joint disease (DJD), and the diagnosis of suspected canine hip dysplasia (CHD), in different groups of experienced observers.
Study Design: Cross-sectional study.
Sample Population: Standard hip-extended radiographs (n = 50).
Methods: Nine experienced observers were divided into 3 groups: surgeons (DECVS), radiologists (DECVDI), and non-board-certified observers (NBC), and into 2 subgroups (academics and non-academics). Cohen's kappa (κ) was calculated for CFHO, CCO, Scler CrAE, DJD, and suspected CHD, and weighted κ was calculated for the DJD score, to determine inter- and intraobserver agreement.
Results: Intraobserver agreement on CFHO, CCO, Scler CrAE, DJD, and suspected CHD ranged from slight to almost perfect, but did not differ significantly between NBC, DECVS, and DECVDI. Radiologists and non-board-certified observers scored the overall DJD score more uniformly than surgeons, as did academics versus non-academics. Interobserver agreement for NBC was more uniform than that of radiologists and surgeons on CCO and DJD. NBC and radiologists scored more uniformly than surgeons on CFHO, and radiologists scored more uniformly than NBC and surgeons on Scler CrAE. Academics scored more uniformly than non-academics, but significantly so only for Scler CrAE.
Conclusions: Recognition of these specific radiographic markers is only fairly reliable within and between experienced observers. Care must therefore be taken when applying these traits in official screening, surgical decision-making, and scientific research.
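The agreement statistics named in the Methods (Cohen's kappa for binary traits such as CFHO presence, and a weighted kappa for the ordinal DJD score) can be sketched as follows. This is a minimal illustration using hypothetical observer scores, not the study's data; the rating vectors and the number of DJD grades are invented for the example, and the weighted version uses linear weights as one common choice.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical scores on the same items."""
    n = len(r1)
    # observed proportion of agreement
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # chance-expected agreement from each rater's marginal category frequencies
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / (n * n)
    return (po - pe) / (1 - pe)

def weighted_kappa(r1, r2, k):
    """Linearly weighted kappa for ordinal scores 0..k-1 (e.g. a graded DJD score)."""
    n = len(r1)
    # disagreement weight grows with the distance between the two grades
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1 / n
    m1 = [sum(row) for row in obs]                          # rater 1 marginals
    m2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # rater 2 marginals
    do = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    de = sum(w[i][j] * m1[i] * m2[j] for i in range(k) for j in range(k))
    return 1 - do / de

# two hypothetical observers scoring CFHO presence (0 = absent, 1 = present) on 10 films
a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))
```

With 8/10 observed agreement but high chance agreement from the skewed marginals, the unweighted kappa here lands in the "moderate" band of the Landis–Koch scale, which is the kind of interpretation the study applies to its observer groups.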
ISSN: 0161-3499, 1532-950X
DOI: 10.1111/j.1532-950X.2014.12309.x