
    Interpretable Deep Knowledge Tracing

    • Abstract: Knowledge tracing tracks a user's cognitive state by modeling his or her sequence of exercise responses and predicting performance on the next exercise, thereby enabling an intelligent assessment of the user's knowledge mastery. Current methods mostly model the skills associated with exercises, ignoring both the rich information contained in exercise contexts and personalized user representations; moreover, current deep learning-based methods are black boxes whose predictions lack explainability. To address these problems, this paper proposes an interpretable deep knowledge tracing (IDKT) framework. First, the contextual information of exercises is exploited to mine the implicit relations between exercises and skills, yielding more expressive exercise and skill representations and alleviating the data sparsity problem. Next, the user's response sequence is modeled to obtain the current hidden knowledge state, which is fused with these embeddings to learn a personalized attention; the attention aggregates neighbor embeddings in the exercise-skill graph into a representation of the current exercise personalized to the user's knowledge state. Finally, given a prediction result, an inference path is selected as its explanation according to the personalized attention. Compared with typical existing methods, IDKT not only achieves the best prediction performance but also provides an explanation at the inference-path level, demonstrating its superiority.
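
    The personalized-attention step described in the abstract lends itself to a short sketch. The snippet below is a minimal illustration, assuming a PyTorch-style implementation: an LSTM summarizes the response sequence into a knowledge state, a small scoring network turns that state plus each neighbor embedding in the exercise-skill graph into personalized attention weights, and the top-weighted neighbor seeds the inference path reported as the explanation. All module names, dimensions, and the single-hop aggregation are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of knowledge-state-conditioned ("personalized") attention
# over neighbors in an exercise-skill graph. Names and sizes are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PersonalizedAttention(nn.Module):
    """Scores each neighbor of the current exercise, conditioned on the
    user's hidden knowledge state, then aggregates the neighborhood."""
    def __init__(self, emb_dim: int, state_dim: int):
        super().__init__()
        self.score = nn.Linear(emb_dim + state_dim, 1)

    def forward(self, knowledge_state, neighbor_embs):
        # knowledge_state: (state_dim,) last hidden state of the sequence model
        # neighbor_embs:  (num_neighbors, emb_dim) skill/exercise embeddings
        state = knowledge_state.unsqueeze(0).expand(neighbor_embs.size(0), -1)
        logits = self.score(torch.cat([neighbor_embs, state], dim=-1)).squeeze(-1)
        attn = F.softmax(logits, dim=0)   # personalized attention weights
        context = attn @ neighbor_embs    # aggregated neighborhood embedding
        return context, attn

# Toy usage: an LSTM models the past responses; its final hidden state is
# the knowledge state that personalizes the attention.
emb_dim, state_dim, num_neighbors = 32, 64, 5
seq = torch.randn(1, 10, emb_dim)                      # 10 past interactions
_, (h, _) = nn.LSTM(emb_dim, state_dim, batch_first=True)(seq)
neighbors = torch.randn(num_neighbors, emb_dim)

attend = PersonalizedAttention(emb_dim, state_dim)
context, attn = attend(h[-1, 0], neighbors)

# The highest-weight neighbor anchors the exercise -> skill inference path
# that is reported alongside the prediction as its explanation.
path_head = int(attn.argmax())
print(f"top neighbor {path_head}, weight {attn[path_head].item():.3f}")
```

    In the full model the aggregation would presumably run over multi-hop neighborhoods of the graph; taking the argmax of the attention weights at each hop is what yields a human-readable exercise-to-skill inference path.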
