Abstract:
Knowledge tracing is a data-driven learner modeling technique that aims to predict learners’ knowledge mastery or future performance from their historical learning data. Recently, with the support of deep learning algorithms, deep learning-based knowledge tracing has become a research hotspot in the field. However, such models are generally ‘black boxes’: their decision-making processes and results lack interpretability, which makes it difficult to provide high-value educational services such as learning attribution analysis and tracing the causes of wrong answers. To address these problems, a Hierarchical Attention network based Knowledge Tracing model (HAKT) is proposed. By mining multi-dimensional, in-depth semantic associations among questions, HAKT establishes a network structure with three attention layers over questions, semantics, and elements, in which a graph attention network and the self-attention mechanism are used for question representation learning, semantic fusion, and question retrieval. A regularization term that improves interpretability is introduced into the loss function, together with a trade-off factor that balances the model’s predictive performance and interpretability. In addition, we define fidelity, an interpretability metric for the prediction results, which quantitatively evaluates model interpretability. Finally, experimental results on six benchmark datasets show that our method effectively improves model interpretability.
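As a rough illustration of the kind of objective the abstract describes (a prediction loss plus an interpretability regularizer weighted by a trade-off factor), the following minimal PyTorch sketch shows one way such a combination could be written; the function name hakt_loss, the argument interp_penalty, and the default trade_off value are illustrative assumptions, not the paper’s actual definitions.

```python
import torch
import torch.nn.functional as F

def hakt_loss(pred_probs: torch.Tensor,
              targets: torch.Tensor,
              interp_penalty: torch.Tensor,
              trade_off: float = 0.1) -> torch.Tensor:
    """Combined objective: prediction loss plus an interpretability
    regularizer, balanced by a trade-off factor (all names illustrative)."""
    # Standard binary cross-entropy on the predicted probability of a
    # correct response versus the observed response.
    pred_loss = F.binary_cross_entropy(pred_probs, targets)
    # `interp_penalty` stands in for the paper's interpretability
    # regularization term, whose exact form is not given in the abstract.
    return pred_loss + trade_off * interp_penalty
```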