Evaluating Sentence‐BERT‐powered learning analytics for automated assessment of students' causal diagrams
Published in: Journal of Computer Assisted Learning, 2024-12, Vol. 40 (6), pp. 2667-2680
Format: Article
Language: English
Abstract
Background
When learning causal relations, completing causal diagrams enhances students' comprehension judgements to some extent. To potentially boost this effect, advances in natural language processing (NLP) enable real‐time formative feedback based on the automated assessment of students' diagrams, covering both the correctness of the responses and their position in the causal chain. However, the responsible adoption and effectiveness of automated diagram assessment depend on its reliability.
Objectives
In this study, we compare two Dutch pre‐trained models (based on RobBERT and BERTje) in combination with two machine‐learning classifiers, Support Vector Machine (SVM) and Neural Network (NN), on several indicators of automated diagram assessment reliability. We also contrast two techniques, semantic similarity and machine learning, for estimating the correct position of a student's diagram response in the causal chain.
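To make this comparison concrete, here is a minimal sketch of such a pipeline: sentence embeddings from a Dutch pre‐trained encoder feed an SVM and a neural‐network classifier that predict response correctness. The Hugging Face checkpoint names, the mean‐pooling setup, and the toy data are our illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (sentence-transformers + scikit-learn). Checkpoint names,
# pooling choice, and toy data are assumptions for illustration only.
from sentence_transformers import SentenceTransformer, models
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

def build_encoder(hf_name: str) -> SentenceTransformer:
    # Wrap a plain Dutch BERT checkpoint with mean pooling so every diagram
    # response maps to one fixed-size sentence embedding.
    word = models.Transformer(hf_name)
    pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
    return SentenceTransformer(modules=[word, pool])

# Hypothetical labelled responses (1 = correct, 0 = incorrect); the study
# instead draws on 2900+ human-labelled diagrams from 700+ students.
texts = ["broeikasgassen nemen toe", "de temperatuur stijgt",
         "het regent vandaag", "de zon komt op"]
labels = [1, 1, 0, 0]

# RobBERT- and BERTje-based encoders, each paired with SVM and NN classifiers.
for hf_name in ("pdelobelle/robbert-v2-dutch-base", "GroNLP/bert-base-dutch-cased"):
    X = build_encoder(hf_name).encode(texts)
    for clf in (SVC(kernel="linear"),
                MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)):
        clf.fit(X, labels)  # a real evaluation would use held-out data
        print(hf_name, type(clf).__name__, clf.score(X, labels))
```

In a real evaluation, the in-sample score above would be replaced by cross-validation over the labelled corpus, with agreement against human raters reported alongside accuracy.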
Methods
For training and evaluation of the models, we capitalize on a human‐labelled dataset containing 2900+ causal diagrams completed by 700+ secondary school students, accumulated from previous diagramming experiments.
Results and Conclusions
In predicting correct responses, 86% accuracy and a Cohen's κ of 0.69 were reached, with SVM-based combinations being roughly three times faster than their NN counterparts (important for real‐time applications). In predicting response position in the causal diagrams, 92% accuracy and a Cohen's κ of 0.89 were reached.
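As one way to read these figures, the sketch below illustrates the semantic‐similarity technique for position estimation mentioned in the Objectives, along with how human‐computer agreement can be quantified with accuracy and Cohen's κ. The checkpoint, the reference chain texts, and the example responses are illustrative assumptions, not the authors' materials.

```python
# Sketch of position estimation via semantic similarity: a response is assigned
# the causal-chain slot whose reference text it is most similar to.
from sentence_transformers import SentenceTransformer, util
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Plain Dutch checkpoint; sentence-transformers applies mean pooling by default.
encoder = SentenceTransformer("GroNLP/bert-base-dutch-cased")

# Hypothetical reference texts, one per slot of a three-step causal chain.
chain = ["broeikasgassen nemen toe", "de temperatuur stijgt", "de zeespiegel stijgt"]
chain_emb = encoder.encode(chain, convert_to_tensor=True)

def predict_position(response: str) -> int:
    # The slot with the highest cosine similarity to the response wins.
    emb = encoder.encode(response, convert_to_tensor=True)
    return int(util.cos_sim(emb, chain_emb).argmax())

# Human-computer agreement on toy data: accuracy and Cohen's kappa, the two
# reliability indicators reported above.
human = [0, 1, 2]
predicted = [predict_position(r) for r in
             ["meer CO2-uitstoot", "het klimaat wordt warmer", "de zee wordt hoger"]]
print(accuracy_score(human, predicted), cohen_kappa_score(human, predicted))
```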
Implications
Taken together, these evaluation figures equip educational designers to decide when these NLP‐powered learning analytics are warranted for automated formative feedback in causal relation learning, thereby potentially enabling real‐time feedback for learners and reducing teachers' workload.
Lay Description
What is currently known about this topic?
Students' accuracy in monitoring their learning of causal relations is, on average, low.
Completing causal diagrams improves monitoring accuracy to some extent.
Advances in natural language processing (NLP) enable automated diagram assessment.
NLP‐powered learning analytics can be used for automated formative feedback.
What does this paper add?
Evaluation of the reliability of the automated diagram assessment.
Performance comparison of different language technologies and techniques.
The accuracy of the automated diagram assessment ranged from 84% to 86%.
Human‐computer Cohen's κ surpassed that …
ISSN: 0266-4909, 1365-2729
DOI: 10.1111/jcal.12992