Item Difficulty Modeling of Paragraph Comprehension Items


Bibliographic Details
Published in: Applied Psychological Measurement, 2006-09, Vol. 30 (5), pp. 394-411
Authors: Gorin, Joanna S.; Embretson, Susan E.
Format: Article
Language: English
Online access: Full text
Description

Abstract: Recent assessment research joining cognitive psychology and psychometric theory has introduced a new technology, item generation. In algorithmic item generation, items are systematically created based on specific combinations of features that underlie the processing required to correctly solve a problem. Reading comprehension items have been more difficult to model than other item types due to the complexities of quantifying text. However, recent developments in artificial intelligence for text analysis permit quantitative indices to represent cognitive sources of difficulty. The current study attempts to identify generative components for Graduate Record Examination paragraph comprehension items through the cognitive decomposition of item difficulty. Text comprehension and decision processes accounted for a significant amount of the variance in item difficulties. The decision model variables contributed significantly to variance in item difficulties, whereas the text representation variables did not. Implications for score interpretation and future possibilities for item generation are discussed. Index terms: difficulty modeling, construct validity, comprehension tests, item generation
ISSN: 0146-6216, 1552-3497
DOI: 10.1177/0146621606288554