AI‐Based Adaptive Feedback in Simulations for Teacher Education: An Experimental Replication in the Field


Full Description

Bibliographic Details
Published in: Journal of Computer Assisted Learning, 2025-02, Vol. 41 (1), p. n/a
Main authors: Bauer, Elisabeth, Sailer, Michael, Niklas, Frank, Greiff, Samuel, Sarbu‐Rothsching, Sven, Zottmann, Jan M., Kiesewetter, Jan, Stadler, Matthias, Fischer, Martin R., Seidel, Tina, Urhahne, Detlef, Sailer, Maximilian, Fischer, Frank
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Summary: ABSTRACT

Background: Artificial intelligence, particularly natural language processing (NLP), enables the automation of formative assessment of written task solutions and thus the automatic provision of adaptive feedback. A laboratory study found that, compared with static feedback (an expert solution), adaptive feedback automated through artificial neural networks enhanced preservice teachers' diagnostic reasoning in a digital case‐based simulation. However, the effectiveness of the simulation with the different feedback types, and the generalizability of the findings to field settings, remained unclear.

Objectives: We tested the generalizability of the previous findings and the effectiveness of a single simulation session with either feedback type in an experimental field study.

Methods: In regular online courses, 332 preservice teachers at five German universities participated in one of three randomly assigned groups: (1) a simulation group with NLP‐based adaptive feedback, (2) a simulation group with static feedback, and (3) a no‐simulation control group. We analysed the effect of the simulation with the two feedback types on participants' judgement accuracy and justification quality.

Results and Conclusions: Compared with static feedback, adaptive feedback significantly enhanced justification quality but not judgement accuracy. Only the simulation with adaptive feedback significantly benefited learners' justification quality relative to the no‐simulation control group; no significant differences in judgement accuracy were found. Our field experiment replicated the findings of the laboratory study. Only a simulation session with adaptive feedback, unlike one with static feedback, appears to enhance learners' justification quality, though not their judgement accuracy. Under field conditions, learners require adaptive support in simulations and can benefit from NLP‐based adaptive feedback using artificial neural networks.
ISSN: 0266-4909
1365-2729
DOI: 10.1111/jcal.13123