qDKT: Question-centric Deep Knowledge Tracing

| Main Authors: | , , , , |
| --- | --- |
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online Access: | Order full text |
| Summary: | Knowledge tracing (KT) models, e.g., the deep knowledge tracing (DKT) model, track an individual learner's acquisition of skills over time by examining the learner's performance on questions related to those skills. A practical limitation in most existing KT models is that all questions nested under a particular skill are treated as equivalent observations of a learner's ability, which is an inaccurate assumption in real-world educational scenarios. To overcome this limitation, we introduce qDKT, a variant of DKT that models every learner's success probability on individual questions over time. First, qDKT incorporates graph Laplacian regularization to smooth predictions under each skill, which is particularly useful when the number of questions in the dataset is large. Second, qDKT uses an initialization scheme inspired by the fastText algorithm, which has found success in a variety of language modeling tasks. Our experiments on several real-world datasets show that qDKT achieves state-of-the-art performance in predicting learner outcomes. Because of this, qDKT can serve as a simple yet tough-to-beat baseline for new question-centric KT models. |
| DOI: | 10.48550/arxiv.2005.12442 |
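
The graph Laplacian regularization mentioned in the summary smooths predicted success probabilities across questions that share a skill. The sketch below illustrates one way such a penalty term can be computed; it is not the authors' implementation, and the names `laplacian_penalty`, `preds`, `question_skill`, and `lam` are assumptions made for this example.

```python
# Minimal sketch (not the paper's code) of a graph Laplacian penalty that
# encourages questions tagged with the same skill to receive similar
# predicted success probabilities. Variable names are illustrative.
import numpy as np

def laplacian_penalty(preds, question_skill, lam=0.1):
    """Return lam * p^T L p, where L is the Laplacian of a graph that
    connects every pair of questions sharing a skill tag."""
    # Adjacency: 1 if two distinct questions belong to the same skill.
    same_skill = (question_skill[:, None] == question_skill[None, :]).astype(float)
    np.fill_diagonal(same_skill, 0.0)
    degree = np.diag(same_skill.sum(axis=1))
    laplacian = degree - same_skill
    # p^T L p equals the sum of squared differences between connected predictions.
    return lam * preds @ laplacian @ preds

# Four questions, two skills: the large gap within skill 0 dominates the penalty.
preds = np.array([0.9, 0.2, 0.5, 0.55])
question_skill = np.array([0, 0, 1, 1])
print(laplacian_penalty(preds, question_skill))  # ~0.049
```

In a training loop, a term like this would typically be added to the usual binary cross-entropy loss over observed responses, with `lam` controlling how strongly questions under the same skill are pulled toward similar predictions.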