Testing the Reliability of Two Rubrics Used in Official English Certificates for the Assessment of Writing
Published in: Revista alicantina de estudios ingleses (Internet), 2022-01 (36), p. 85
Author:
Format: Article
Language: English
Online access: Full text
Abstract: The learning of English as a Foreign Language (EFL) is a primary concern worldwide today. This has spurred a proliferation of related studies and the emergence of new methodologies and instruments of assessment. Alongside these, new qualifications devoted to the certification of language competence have been created, triggered in no small part by the fact that demonstrating one's level of proficiency has become almost an imperative when applying for a job or a grant, or when seeking to study in a foreign country. It is therefore essential to test the reliability of the instruments used for the assessment of competences. To this end, over a four-week period, four different evaluators assessed the written essays of students on a C1-level course using the writing rubrics for Cambridge Assessment English's Certificate in Advanced English (CAE) and Trinity College's Integrated Skills in English Exam III (ISE-III). The aim was to examine the reliability of the CAE and ISE-III rubrics by calculating their respective Cronbach's alpha, Corrected Item-Total correlation, Intra-class Correlation Coefficient and Standard Error of Measurement. Afterwards, the scores awarded to each essay under the two rubrics were compared so as to ascertain whether their language is clear and which criteria tended to obtain higher and lower marks on average. Examiners were also surveyed at the end of the assessment process to gather their opinions on the clarity of the two rubrics. The research provided meaningful results: although both rubrics obtained good reliability coefficients, the variance in scores is greater when the ISE-III rubric is used, and examiners tend to be tougher when assessing the learner's language resource than any other criterion. It is also worth noting that, according to the survey, examiners generally perceived some of the descriptors in both rubrics as confusing or vague, which suggests that both rubrics should be revised and could benefit from some improvement.
ISSN: 0214-4808, 2171-861X
DOI: 10.14198/raei.2022.36.05
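The abstract names the reliability statistics used in the study (Cronbach's alpha, Corrected Item-Total correlation, Intra-class Correlation Coefficient, Standard Error of Measurement) without showing how they are obtained. The sketch below is a minimal Python illustration of how three of them might be computed from a small table of criterion scores; the criterion names and all numbers are hypothetical and are not taken from the study or its rubrics.

```python
import numpy as np
import pandas as pd

# Hypothetical band scores: rows are essays, columns are rubric criteria.
# Illustrative only; not the data collected in the study.
scores = pd.DataFrame(
    {
        "content":      [4, 3, 5, 2, 4, 3, 5, 4],
        "organisation": [4, 2, 5, 3, 3, 3, 4, 4],
        "language":     [3, 2, 4, 2, 3, 2, 4, 3],
        "task":         [3, 3, 4, 2, 4, 3, 5, 4],
    }
)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlation of each criterion with the sum of the remaining criteria."""
    return pd.Series(
        {c: items[c].corr(items.drop(columns=c).sum(axis=1)) for c in items.columns}
    )

alpha = cronbach_alpha(scores)
totals = scores.sum(axis=1)
# Standard Error of Measurement: SD of observed totals times sqrt(1 - reliability).
sem = totals.std(ddof=1) * np.sqrt(1 - alpha)

print(f"Cronbach's alpha: {alpha:.3f}")
print("Corrected item-total correlations:")
print(corrected_item_total(scores).round(3))
print(f"SEM (total score): {sem:.3f}")
```

The Intra-class Correlation Coefficient, which measures agreement across the four evaluators rather than across criteria, would instead require a raters-by-essays layout and is typically obtained with a dedicated routine (for example, pingouin's intraclass_corr); the exact model and layout used by the authors are not specified in the abstract.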