Language Model Can Do Knowledge Tracing: Simple but Effective Method to Integrate Language Model and Knowledge Tracing Task
Saved in:
Main authors: , , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Knowledge Tracing (KT) is a critical task in online learning for modeling
student knowledge over time. Despite the success of deep learning-based KT
models, which rely on sequences of numbers as data, most existing approaches
fail to leverage the rich semantic information in the text of questions and
concepts. This paper proposes Language model-based Knowledge Tracing (LKT), a
novel framework that integrates pre-trained language models (PLMs) with KT
methods. By leveraging the power of language models to capture semantic
representations, LKT effectively incorporates textual information and
significantly outperforms previous KT models on large benchmark datasets.
Moreover, we demonstrate that LKT can effectively address the cold-start
problem in KT by leveraging the semantic knowledge captured by PLMs. LKT is
also more interpretable than traditional KT models because it uses text-rich
data. We applied the local interpretable model-agnostic explanation (LIME)
technique and analyzed attention scores to further interpret model
performance. Our work highlights the potential of integrating PLMs with KT and
paves the way for future research in the KT domain.
DOI: 10.48550/arxiv.2406.02893