SPAKT: A Self-Supervised Pre-TrAining Method for Knowledge Tracing


Bibliographic Details
Published in: IEEE Access, 2022, Vol. 10, pp. 72145-72154
Authors: Ma, Yuling; Han, Peng; Qiao, Huiyan; Cui, Chaoran; Yin, Yilong; Yu, Dehu
Format: Article
Language: English
Online access: Full text
Description
Abstract: Knowledge tracing (KT) is a core task in computer-aided education systems: it aims to predict whether a student can answer the next exercise (i.e., question) correctly based on his/her historical answer records. In recent years, deep neural network-based approaches have been widely developed for KT and have achieved promising results. More recently, several studies have further boosted these KT models by exploiting rich relationships, including exercise-skill relations (E-S), exercise similarity (E-E), and skill similarity (S-S). However, this relationship information is frequently absent in real-world educational applications, and labeling it is labor-intensive for human experts. Inspired by recent advances in the natural language processing domain, we propose a novel pre-training approach, named SPAKT, which uses self-supervised learning to pre-train exercise embedding representations without the need for expensive human-expert annotations. In contrast to existing pre-training methods that rely heavily on manually labeled knowledge about the E-E, S-S, or E-S relationships, the core idea of SPAKT is to design three self-attention modules that model the E-S, E-E, and S-S relationships, respectively, all of which can be trained in a self-supervised setting. As a pre-training approach, SPAKT can be effortlessly incorporated into existing deep neural network-based KT frameworks. We show experimentally that, even without expensive annotations for the three kinds of relationships above, our model achieves performance competitive with the state of the art. Our implementation is publicly available at https://github.com/Vinci-hp/pretrainKT .
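To make the abstract's idea concrete, below is a minimal PyTorch sketch of one self-attention module over exercise sequences, pre-trained with a masked-prediction objective so that no human-labeled relations are required. All names, dimensions, the masking ratio, and the loss are illustrative assumptions, not the paper's actual design (positional encodings are also omitted for brevity); the authors' real implementation is at https://github.com/Vinci-hp/pretrainKT.

```python
import torch
import torch.nn as nn

class RelationAttention(nn.Module):
    """One self-attention block over an exercise sequence.

    Per the abstract, three such modules could model the E-S, E-E, and
    S-S relations; this arrangement is a hypothetical illustration.
    """
    def __init__(self, num_exercises: int, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        # Reserve one extra id to serve as a [MASK] token (assumption).
        self.embed = nn.Embedding(num_exercises + 1, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, num_exercises)  # logits over exercise ids

    def forward(self, exercise_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(exercise_ids)   # (batch, seq_len, d_model)
        h, _ = self.attn(x, x, x)      # self-attention across the sequence
        return self.out(h)

# Self-supervised pre-training step: mask random positions and predict the
# original exercise ids, so no expert annotations are needed.
num_exercises, mask_id = 1000, 1000
model = RelationAttention(num_exercises)
seq = torch.randint(0, num_exercises, (8, 50))  # a batch of exercise sequences
mask = torch.rand(seq.shape) < 0.15             # 15% masking (BERT-style assumption)
inputs = seq.masked_fill(mask, mask_id)
logits = model(inputs)                          # (8, 50, num_exercises)
loss = nn.functional.cross_entropy(logits[mask], seq[mask])
loss.backward()
```

Under these assumptions, three such blocks, one each for the E-S, E-E, and S-S relations, could be pre-trained this way and their learned exercise embeddings then handed to any downstream deep KT model, matching the plug-in role the abstract describes.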
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3187987