Revisiting the Effect of Varying the Number of Response Alternatives in Clinical Assessment: Evidence From Measuring ADHD Symptoms



Bibliographic Details
Published in: Assessment (Odessa, Fla.), 2021-07, Vol. 28 (5), p. 1287-1300
Authors: Shi, Dexin, Siceloff, E. Rebekah, Castellanos, Rebeca E., Bridges, Rachel M., Jiang, Zhehan, Flory, Kate, Benson, Kari
Format: Article
Language: English
Online access: Full text
Description
Abstract: This study illustrated the effect of varying the number of response alternatives in clinical assessment using a within-participant, repeated-measures approach. Participants reported the presence of current attention-deficit/hyperactivity disorder symptoms using both a binary and a polytomous (4-point) rating scale across two counterbalanced administrations of the Current Symptoms Scale (CSS). Psychometric properties of the CSS were examined using (a) self-reported binary ratings, (b) self-reported 4-point ratings obtained from each administration of the CSS, and (c) artificially dichotomized responses derived from observed 4-point ratings. Under the same ordinal factor analysis model, results indicated that the number of response alternatives affected item parameter estimates, standard errors, goodness-of-fit indices, individuals' test scores, and the reliability of the test scores. With fewer response alternatives, the precision of the measurement decreased, and the power of the goodness-of-fit indices to detect model misfit decreased. These findings add to recent research advocating for the inclusion of a large number of response alternatives in the development of clinical assessments and further suggest that researchers should be cautious about reducing the number of response categories in data analysis.
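As a minimal sketch of the artificial-dichotomization step described in the abstract: 4-point ordinal ratings are collapsed into binary presence/absence responses at a cutpoint. The cutpoint used below (rating >= 2, i.e., "often" or "very often" counts as symptom present) is an assumption for illustration only; the abstract does not state the rule the authors applied.

```python
# Hypothetical illustration of dichotomizing polytomous ratings.
# Scale assumption: 0 = never, 1 = sometimes, 2 = often, 3 = very often.
ratings_4pt = [0, 1, 2, 3, 1, 3, 0, 2]  # one respondent's item ratings

def dichotomize(rating, cutpoint=2):
    """Map a 0-3 ordinal rating to 0/1 symptom presence at the cutpoint."""
    return 1 if rating >= cutpoint else 0

binary = [dichotomize(r) for r in ratings_4pt]
print(binary)  # -> [0, 0, 1, 1, 0, 1, 0, 1]
```

Note that this transformation is lossy: distinct ratings (e.g., 2 vs. 3) map to the same binary value, which is one way fewer response alternatives can reduce measurement precision.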
ISSN: 1073-1911, 1552-3489
DOI: 10.1177/1073191120952885