Knowledge Graph and Personalized Answer Sequences for Programming Knowledge Tracing
Published in: Applied Sciences, 2024-09, Vol. 14 (17), p. 7952
Format: Article
Language: English
Online access: Full text
Abstract: Knowledge tracing is a significant research area in educational data mining, aiming to predict future performance based on students’ historical learning data. In the field of programming, knowledge tracing faces several challenges, including inaccurate exercise representation and limited student information. These issues can lead to biased models and inaccurate predictions of students’ knowledge states. To address them, we propose a novel programming knowledge tracing model named GPPKT (Knowledge Graph and Personalized Answer Sequences for Programming Knowledge Tracing), which improves performance by using knowledge graphs and personalized answer sequences. Specifically, we establish associations between well-defined knowledge concepts and exercises, and incorporate student learning abilities together with latent representations of personalized answer sequences generated by a Variational Autoencoder (VAE). The model employs Long Short-Term Memory (LSTM) networks and attention mechanisms to integrate the embedding vectors, such as exercise and student information. Extensive experiments are conducted on two real-world programming datasets. The results indicate that GPPKT outperforms state-of-the-art methods, achieving an AUC of 0.8840 and an accuracy of 0.8472 on the Luogu dataset, and an AUC of 0.7770 and an accuracy of 0.8799 on the Codeforces dataset. This demonstrates the superiority of the proposed model, with an average improvement of 9.03% in AUC and 2.02% in accuracy across both datasets.
ISSN: 2076-3417
DOI: 10.3390/app14177952
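The abstract only names the ingredients of GPPKT (concept/exercise associations, a VAE over personalized answer sequences, and an LSTM with attention), so the following is a minimal, hypothetical PyTorch sketch of how such pieces could be wired together. All class names, dimensions, and the exact fusion scheme here are assumptions for illustration, not the authors' published implementation.

```python
# Hypothetical sketch: exercise/concept embeddings + a VAE latent for the student's
# answer sequence, fused by an LSTM with causal attention. Not the authors' code.
import torch
import torch.nn as nn


class AnswerSequenceVAE(nn.Module):
    """Encodes a student's personalized answer sequence into a latent vector."""

    def __init__(self, input_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Linear(input_dim, 2 * latent_dim)  # outputs mean and log-variance
        self.decoder = nn.Linear(latent_dim, input_dim)

    def forward(self, x: torch.Tensor):
        mu, log_var = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)  # reparameterization trick
        recon = self.decoder(z)            # recon/mu/log_var would feed the VAE loss during training
        return z, recon, mu, log_var


class GPPKTSketch(nn.Module):
    """Toy knowledge-tracing model: exercise + concept embeddings, a VAE latent,
    an LSTM over the interaction history, and attention over the hidden states."""

    def __init__(self, n_exercises, n_concepts, emb_dim=64, latent_dim=16, hidden_dim=128):
        super().__init__()
        self.exercise_emb = nn.Embedding(n_exercises, emb_dim)
        self.concept_emb = nn.Embedding(n_concepts, emb_dim)   # stands in for the knowledge-graph link
        self.response_emb = nn.Embedding(2, emb_dim)            # correct / incorrect
        self.vae = AnswerSequenceVAE(input_dim=emb_dim, latent_dim=latent_dim)
        self.lstm = nn.LSTM(3 * emb_dim + latent_dim, hidden_dim, batch_first=True)
        self.predict = nn.Linear(hidden_dim, 1)

    def forward(self, exercises, concepts, responses):
        # exercises, concepts, responses: (batch, seq_len) integer tensors
        e = self.exercise_emb(exercises)
        c = self.concept_emb(concepts)
        r = self.response_emb(responses)
        z, _, _, _ = self.vae(r)                                 # latent view of the answer sequence
        h, _ = self.lstm(torch.cat([e, c, r, z], dim=-1))        # (batch, seq_len, hidden_dim)
        scores = h @ h.transpose(1, 2) / h.size(-1) ** 0.5       # dot-product attention scores
        mask = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(mask, float("-inf"))         # causal mask: no peeking at future steps
        context = torch.softmax(scores, dim=-1) @ h              # attention-weighted knowledge state
        return torch.sigmoid(self.predict(context)).squeeze(-1)  # P(correct) at each step


if __name__ == "__main__":
    model = GPPKTSketch(n_exercises=100, n_concepts=20)
    ex = torch.randint(0, 100, (4, 12))
    co = torch.randint(0, 20, (4, 12))
    re = torch.randint(0, 2, (4, 12))
    print(model(ex, co, re).shape)  # torch.Size([4, 12])
```

In a full model of this kind, the VAE reconstruction and KL terms would be trained jointly with the prediction loss, and the prediction at each step would condition on the upcoming exercise; this sketch only shows one plausible way the components named in the abstract could be combined.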