Validity arguments for diagnostic assessment using automated writing evaluation



Bibliographic Details
Published in: Language Testing, 2015-07, Vol. 32 (3), pp. 385-405
Authors: Chapelle, Carol A.; Cotos, Elena; Lee, Jooyoung
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Two examples demonstrate an argument-based approach to the validation of diagnostic assessment using automated writing evaluation (AWE). Criterion® was developed by Educational Testing Service to analyze students’ papers grammatically, providing sentence-level error feedback. An interpretive argument was developed for its use as part of the diagnostic assessment process in undergraduate university English for academic purposes (EAP) classes. The Intelligent Academic Discourse Evaluator (IADE) was developed for use in graduate university EAP classes, where the goal was to help students improve their discipline-specific writing. The validation for each was designed to support claims about the intended purposes of the assessments. We present the interpretive argument for each and show some of the data gathered as backing for the respective validity arguments, which cover the range of inferences one would make in claiming validity for the interpretations, uses, and consequences of diagnostic AWE-based assessments.
ISSN: 0265-5322; 1477-0946
DOI: 10.1177/0265532214565386