The effect of automated feedback on revision behavior and learning gains in formative assessment of scientific argument writing

Bibliographic details
Published in: Computers and Education, 2020-01, Vol. 143, p. 103668, Article 103668
Authors: Zhu, Mengxiao; Liu, Ou Lydia; Lee, Hee-Sun
Format: Article
Language: English
Online access: Full text
Description
Abstract: Application of new automated scoring technologies, such as natural language processing and machine learning, makes it possible to provide automated feedback on students' short written responses. Although many studies have investigated automated feedback in computer-mediated learning environments, most have focused on multiple-choice items rather than constructed-response items. This study focuses on the latter and investigates a formative feedback system integrated into an online science curriculum module on climate change. The feedback system incorporates automated scoring technologies to support students' revision of scientific arguments. By analyzing log files from the climate module, we explore how the student revisions enabled by the formative feedback system correlate with student performance and learning gains. We also compare the impact of generic feedback (context-independent) and contextualized feedback (context-dependent). Our results showed that (1) students with higher initial scores were, on average, more likely to revise after receiving the automated feedback, (2) revisions were positively related to score increases, and (3) contextualized feedback was more effective in assisting learning. The findings of this study provide insights into the use of automated feedback to improve scientific argument writing as part of classroom instruction.

Highlights:
•Investigates how students reacted to instant feedback on constructed-response items assessing scientific argumentation skills.
•Uses feedback generated automatically with automated scoring technologies.
•Compares the impact of generic feedback (context-independent) and contextualized feedback (context-dependent).
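To make the general idea concrete, the following is a minimal, self-contained sketch of how an automated scorer might map a short constructed response to a score level and then to a generic or contextualized feedback message. This is not the authors' system: the classifier, training examples, score levels, and feedback wording are invented placeholders, and a scikit-learn text-classification pipeline is assumed purely for illustration.

# Minimal sketch (not the paper's system): score a short constructed
# response with a bag-of-words classifier, then map the predicted score
# level to a feedback message. All data and messages below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: (student response, human-assigned argumentation score)
responses = [
    "The climate is changing because it got hotter.",
    "CO2 traps heat, and the temperature record shows warming since 1900.",
    "I think it is warming but I am not sure why.",
    "Rising CO2 levels correlate with rising temperatures, supporting the claim.",
]
scores = [1, 3, 1, 3]  # e.g., 1 = claim only, 3 = claim plus evidence

# Train a simple automated scorer (stand-in for an NLP/ML scoring engine).
scorer = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
scorer.fit(responses, scores)

# Generic feedback ignores the task context; contextualized feedback
# references the module's materials. Wording here is invented.
GENERIC_FEEDBACK = {
    1: "State your claim and add evidence to support it.",
    3: "Good use of evidence. Explain how it supports your claim.",
}
CONTEXTUALIZED_FEEDBACK = {
    1: "Use the temperature and CO2 graphs from the module as evidence.",
    3: "Explain how the CO2 trend in the graphs accounts for the warming.",
}

def give_feedback(response: str, contextualized: bool = True) -> str:
    level = int(scorer.predict([response])[0])
    table = CONTEXTUALIZED_FEEDBACK if contextualized else GENERIC_FEEDBACK
    return table.get(level, "Revise your argument and resubmit.")

print(give_feedback("It is getting warmer because of CO2 in the air."))

In a formative setting such a loop would be repeated: each submission is scored, a feedback message is shown, and the revised response is logged so that revision behavior can later be related to score changes.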
ISSN: 0360-1315, 1873-782X
DOI: 10.1016/j.compedu.2019.103668