Improving Knowledge Tracing Via Considering Two Types of Actual Differences From Exercises and Prior Knowledge
Published in: IEEE Transactions on Learning Technologies, 2023-06, Vol. 16 (3), pp. 1-16
Format: Article
Language: English
Abstract: To offer adaptive learning to learners in intelligent tutoring systems, one of the fundamental tasks is knowledge tracing (KT), which aims to assess learners' learning states and predict their future performance. However, there are two crucial issues in deep learning-based KT models. First, knowledge concepts, rather than exercises, are used to predict learners' performance. This choice ignores the actual differences among exercises, which prevents the KT model from exploiting the rich information at the exercise level and thus weakens its performance. Second, the actual differences in learners' prior knowledge are neglected. Therefore, in-depth study of these two types of actual differences offers the possibility of further improving the prediction performance of KT. To this end, we propose a Fine-Grained Knowledge Tracing (FGKT) model to capture these two types of actual differences. We first obtain exercise representations by considering the corresponding knowledge concepts and the actual differences among exercises. Then, an effective attention mechanism is designed to capture the relevance between assessment exercises and historical interactions so as to acquire individual prior knowledge. Finally, we evaluate the proposed FGKT model on several available benchmark datasets. The experimental results show that our model surpasses benchmark KT models by up to 7% in prediction performance on the latest ASSISTments dataset.
ISSN: 1939-1382, 2372-0050
DOI: 10.1109/TLT.2023.3259013
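
Illustrative sketch (not from the paper): the abstract describes an attention mechanism that scores the relevance between the current assessment exercise and a learner's historical interactions in order to summarize individual prior knowledge. The PyTorch snippet below shows one plausible shape such an attention step could take; the layer names, dimensions, and scaled dot-product formulation are assumptions for illustration, not the authors' actual FGKT design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentivePriorKnowledge(nn.Module):
        """Sketch: attend over historical interaction embeddings to form a
        prior-knowledge summary for the current assessment exercise.
        All dimensions and projections are illustrative assumptions."""

        def __init__(self, dim=64):
            super().__init__()
            self.query_proj = nn.Linear(dim, dim)  # projects the assessment exercise
            self.key_proj = nn.Linear(dim, dim)    # projects past interactions
            self.value_proj = nn.Linear(dim, dim)

        def forward(self, exercise, history):
            # exercise: (batch, dim); history: (batch, seq_len, dim)
            q = self.query_proj(exercise).unsqueeze(1)            # (batch, 1, dim)
            k = self.key_proj(history)                            # (batch, seq_len, dim)
            v = self.value_proj(history)
            scores = torch.matmul(q, k.transpose(-2, -1)) / k.size(-1) ** 0.5
            weights = F.softmax(scores, dim=-1)                   # relevance of each past interaction
            prior = torch.matmul(weights, v).squeeze(1)           # (batch, dim) prior-knowledge summary
            return prior, weights.squeeze(1)

    if __name__ == "__main__":
        model = AttentivePriorKnowledge(dim=64)
        exercise = torch.randn(2, 64)      # current assessment exercise embeddings
        history = torch.randn(2, 10, 64)   # 10 past interaction embeddings per learner
        prior, weights = model(exercise, history)
        print(prior.shape, weights.shape)  # torch.Size([2, 64]) torch.Size([2, 10])

In such a setup, the attention weights indicate which past interactions are most relevant to the exercise being assessed, and the weighted summary would then be combined with the exercise representation before the final correctness prediction.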