
Hierarchical Attention Network Based Interpretable Knowledge Tracing

Abstract: Knowledge tracing is a data-driven learner modeling technique that aims to predict a learner's knowledge mastery or future performance from historical answer records. In recent years, supported by deep learning algorithms, deep knowledge tracing has become a research hotspot in this field. However, deep knowledge tracing models are generally black boxes: their decision processes and results lack interpretability, which makes it difficult to provide high-value educational services such as learning attribution analysis and error-cause tracing. To address these problems, a Hierarchical Attention network based Knowledge Tracing model (HAKT) is proposed. By mining multi-dimensional, deep semantic associations between questions, HAKT builds a network structure with three attention layers over question elements, semantics, and answer records, and uses graph attention networks and self-attention mechanisms for question embedding, semantic fusion, and record retrieval. A regularization term that improves interpretability is introduced into the loss function, together with a trade-off factor that balances predictive performance against interpretability strength. In addition, an interpretability measure for the prediction results, fidelity, is defined to quantitatively evaluate model interpretability. Finally, experimental results on six benchmark datasets show that the proposed method effectively improves model interpretability.
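
To make the three-layer structure concrete, the following is a minimal PyTorch sketch of a hierarchical attention stack in the spirit of HAKT. The class name, dimensions, and the use of nn.MultiheadAttention in place of a dedicated graph attention network for the element layer are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class HAKTSketch(nn.Module):
        """Minimal sketch of a three-layer attention structure (assumed, not the paper's code)."""

        def __init__(self, n_elements, d_model=64, n_heads=4):
            super().__init__()
            self.element_emb = nn.Embedding(n_elements, d_model)
            # 1) Element-level attention: aggregate a question's elements
            #    (e.g. the concepts it covers) into a question embedding.
            self.element_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            # 2) Semantic-level attention: fuse semantic associations among
            #    the questions in the learning sequence.
            self.semantic_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            # 3) Record-level attention: retrieve the past answer records most
            #    relevant to the question being predicted.
            self.record_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.response_emb = nn.Embedding(2, d_model)   # correct / incorrect
            self.predict = nn.Linear(d_model, 1)

        def forward(self, elements, responses):
            # elements:  (batch, seq_len, n_elem_per_q) element ids for each question
            # responses: (batch, seq_len) 0/1 correctness of each past answer
            b, t, k = elements.shape
            e = self.element_emb(elements).reshape(b * t, k, -1)
            q, _ = self.element_attn(e, e, e)                 # element-level aggregation
            q = q.mean(dim=1).reshape(b, t, -1)               # one vector per question
            q, _ = self.semantic_attn(q, q, q)                # semantic-level fusion
            records = q + self.response_emb(responses)        # interaction records
            # Simplified causal mask so a step cannot attend to future records.
            mask = torch.triu(torch.ones(t, t, dtype=torch.bool, device=q.device), diagonal=1)
            # Record-level retrieval; `record_weights` is what the interpretability
            # regularizer and the fidelity measure would operate on.
            h, record_weights = self.record_attn(q, records, records, attn_mask=mask)
            return torch.sigmoid(self.predict(h)).squeeze(-1), record_weights

The record-level attention weights returned by the forward pass are the quantities that the regularization term and the fidelity measure discussed below would act on.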

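The loss modification described in the abstract can be read as a standard prediction loss plus an interpretability regularizer scaled by a trade-off factor. The entropy-based regularizer below is an assumption for illustration; the paper only states that a regularization term and a trade-off factor are introduced, without committing to this exact form.

    import torch
    import torch.nn.functional as F

    def hakt_objective(pred, target, record_weights, lam=0.1):
        """Sketch of a prediction loss plus an assumed interpretability regularizer."""
        # Prediction loss: binary cross-entropy on answer correctness.
        pred_loss = F.binary_cross_entropy(pred, target.float())

        # Illustrative regularizer: push record-level attention toward sparse,
        # low-entropy distributions so that a few past records explain each prediction.
        eps = 1e-8
        entropy = -(record_weights * (record_weights + eps).log()).sum(dim=-1).mean()

        # Larger `lam` trades predictive performance for stronger interpretability.
        return pred_loss + lam * entropy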
       
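Fidelity can be understood as the degree to which the records highlighted by the attention weights actually account for the model's predictions. A hypothetical formulation is sketched below, comparing the full model's predicted labels with those obtained when the model is restricted to the highlighted records; the paper's own definition may differ in detail.

    import torch

    def fidelity(full_pred, explained_pred, threshold=0.5):
        """Assumed fidelity score: fraction of predictions whose label is unchanged
        when the model only sees the records its attention highlights."""
        full_label = (full_pred >= threshold)
        explained_label = (explained_pred >= threshold)
        return (full_label == explained_label).float().mean().item()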

