The influence of rater language background on writing performance assessment

Bibliographic details
Published in: Language Testing, 2009-10, Vol. 26(4), pp. 485-505
Authors: Johnson, Jeff S.; Lim, Gad S.
Format: Article
Language: English
Online access: Full text
Description
Abstract: Language performance assessments typically require human raters, introducing possible error. In international examinations of English proficiency, rater language background is an especially salient factor that needs to be considered. The existence of rater language background-related bias in writing performance assessment is the object of this study. Data for this study are ratings assigned by Michigan English Language Assessment Battery (MELAB) raters to compositions written by examinees of various language backgrounds. While most of the raters are native speakers of English, four have first languages other than English: two Spanish, one Korean, and one bilingual speaker of Filipino and Chinese (Amoy). Examinees were divided into 21 language groups. The IRT application FACETS was used to estimate and control for rater severity when calculating the amount of bias reflected by each rater's set of ratings for each language group. Results show that the magnitude of bias terms for all raters for all language groups was minimal, thus having little effect on examinee scores, and that there is no pattern of language-related bias in the ratings.
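For context, FACETS implements the many-facet Rasch model. A minimal sketch of the rating scale form, with a facet structure assumed here for illustration (the record itself does not spell it out), is:

    \log \frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \alpha_j - \tau_k

where \theta_n is the ability of examinee n, \delta_i is the difficulty of task i, \alpha_j is the severity of rater j, and \tau_k is the threshold of rating category k. A bias/interaction analysis of the kind described in the abstract adds an interaction term, e.g. \phi_{jg} for rater j and examinee language group g,

    \log \frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \alpha_j - \tau_k + \phi_{jg},

and examines whether the estimated \phi_{jg} terms differ meaningfully from zero; the study reports that they do not.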
ISSN: 0265-5322
eISSN: 1477-0946
DOI: 10.1177/0265532209340186