An error-analysis study from an EFL writing context: Human and Automated Essay Scoring Approaches

Bibliographic Details
Published in: Technology, Knowledge and Learning, 2023-09, Vol. 28 (3), p. 1015-1031
Authors: Almusharraf, Norah; Alotaibi, Hind
Format: Article
Language: English
Online access: Full text
Description
Abstract: Evaluating written texts is widely regarded as a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can address some of the limitations of human scoring. This study evaluated the performance of one AES system, Grammarly, in comparison to human raters. Both approaches were analyzed quantitatively using Corder's (1974) error analysis approach to categorize the writing errors in a corpus of 197 essays written by learners of English as a foreign language (EFL). Pearson correlation coefficients and paired-sample t-tests were conducted to analyze and compare the errors detected by the two approaches. The results showed a moderate correlation between human raters and AES in terms of both total scores and the number of errors detected. The results also indicated that the total number of errors detected by AES was significantly higher than that detected by human raters, and that the human raters tended to give students higher scores. The findings encourage a more open attitude towards AES systems to support EFL writing teachers in assessing students' work.
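As an illustration of the statistical comparison described in the abstract, the following short Python sketch correlates per-essay error counts from human raters and an AES tool and runs a paired-sample t-test. The error counts and variable names are hypothetical placeholders, not data from the study; only the choice of tests follows the abstract.

from scipy import stats

# Hypothetical per-essay error counts; not data from the study.
human_errors = [12, 8, 15, 9, 11, 7, 14, 10]    # errors flagged by human raters
aes_errors = [18, 11, 20, 13, 16, 10, 19, 14]   # errors flagged by the AES tool

# Pearson correlation between the two sets of error counts
r, r_p = stats.pearsonr(human_errors, aes_errors)

# Paired-sample t-test on the difference in errors detected per essay
t, t_p = stats.ttest_rel(aes_errors, human_errors)

print(f"Pearson r = {r:.2f} (p = {r_p:.3f})")
print(f"paired t = {t:.2f} (p = {t_p:.3f})")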
ISSN: 2211-1662, 2211-1670
DOI: 10.1007/s10758-022-09592-z