Test-Retest Reliability and Interpretation of Common Concussion Assessment Tools: Findings from the NCAA-DoD CARE Consortium

Bibliographic Details
Published in: Sports Medicine (Auckland), 2018-05, Vol. 48 (5), p. 1255-1268
Main authors: Broglio, Steven P., Katz, Barry P., Zhao, Shi, McCrea, Michael, McAllister, Thomas
Format: Article
Language: English
Online access: Full text
Description
Abstract:
Background: Concussion diagnosis is typically made through clinical examination and supported by performance on clinical assessment tools. Performance on commonly implemented and emerging assessment tools is known to vary between administrations in the absence of concussion.
Objective: To evaluate the test-retest reliability of commonly implemented and emerging concussion assessment tools across a large, nationally representative sample of student-athletes.
Methods: Participants (n = 4874) from the Concussion Assessment, Research, and Education Consortium completed annual baseline assessments on two or three occasions. Each assessment included measures of self-reported concussion symptoms, motor control, brief and extended neurocognitive function, reaction time, oculomotor/oculovestibular function, and quality of life. Consistency between years 1 and 2 and between years 1 and 3 was estimated using intraclass correlation coefficients or Kappa, together with effect sizes (Cohen's d). Clinical interpretation guidelines were also generated using confidence intervals to account for non-normally distributed data.
Results: Reliability for the self-reported concussion symptom, motor control, and brief and extended neurocognitive assessments from year 1 to year 2 ranged from 0.30 to 0.72, while effect sizes ranged from 0.01 to 0.28 (i.e., small). Reliability for these same measures ranged from 0.34 to 0.66 over the year 1-3 interval, with effect sizes ranging from 0.05 to 0.42 (i.e., small to less than medium). The year 1-2 reliability for the reaction time, oculomotor/oculovestibular function, and quality-of-life measures ranged from 0.28 to 0.74, with effect sizes from 0.01 to 0.38 (i.e., small to less than medium).
Conclusions: This investigation noted less-than-optimal reliability for most common and emerging concussion assessment tools. Despite this finding, their use is still necessitated by the absence of a gold-standard diagnostic measure, with the ultimate goal of developing more refined and sound tools for clinical use. Clinical interpretation guidelines are provided so that clinicians can apply these tools with a known degree of certainty.
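The abstract does not state which ICC model or effect-size convention the consortium used. Purely as an illustration of the kind of computation involved, the sketch below estimates a two-way random-effects, absolute-agreement, single-measure ICC(2,1) and a pooled-SD Cohen's d for a hypothetical pair of annual baseline scores; the simulated data, variable names, and choice of ICC form are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random-effects, absolute-agreement, single measure.

    scores: (n_subjects, k_occasions) array, e.g. column 0 = year-1 baseline,
    column 1 = year-2 baseline. (Illustrative assumption, not the paper's code.)
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-athlete means
    col_means = scores.mean(axis=0)   # per-occasion means

    # Two-way ANOVA sums of squares and mean squares
    ss_rows = k * np.sum((row_means - grand_mean) ** 2)
    ss_cols = n * np.sum((col_means - grand_mean) ** 2)
    ss_total = np.sum((scores - grand_mean) ** 2)
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    # Shrout & Fleiss ICC(2,1)
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

def cohens_d(year1, year2):
    """Effect size of the year-to-year change, using the pooled SD."""
    year1, year2 = np.asarray(year1, float), np.asarray(year2, float)
    pooled_sd = np.sqrt((year1.var(ddof=1) + year2.var(ddof=1)) / 2)
    return (year2.mean() - year1.mean()) / pooled_sd

# Hypothetical example: total symptom scores at two annual baselines
rng = np.random.default_rng(0)
year1 = rng.poisson(5, size=200).astype(float)
year2 = 0.5 * year1 + rng.poisson(3, size=200)

print(f"ICC(2,1) = {icc_2_1(np.column_stack([year1, year2])):.2f}")
print(f"Cohen's d = {cohens_d(year1, year2):.2f}")
```

Under commonly cited rules of thumb, ICCs below roughly 0.5 are read as poor and 0.5-0.75 as moderate reliability, which is the sense in which the 0.28-0.74 ranges reported above are described as less than optimal.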
ISSN: 0112-1642, 1179-2035
DOI: 10.1007/s40279-017-0813-0