ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development (计算机研究与发展), 2021, Vol. 58, Issue 12: 2618-2629. doi: 10.7544/issn1000-1239.2021.20211021

Special Topic: Interpretable Intelligent Learning Methods and Their Applications (2021)

• Artificial Intelligence •


Interpretable Deep Knowledge Tracing

Liu Kunjia (刘坤佳), Li Xinyi (李欣奕), Tang Jiuyang (唐九阳), Zhao Xiang (赵翔)

  1. (Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073) (kunjia_liu@nudt.edu.cn)
  • Online: 2021-12-01
  • Supported by: 
    This work was supported by the National Key Research and Development Program of China (2020AAA0108800), the National Natural Science Foundation of China (61872446, 71971212, 62002373), and the Postgraduate Scientific Research Innovation Project of Hunan Province (CX20200067).


Abstract: The task of knowledge tracing involves tracking a user's cognitive state by modeling his or her exercise-answering sequence and predicting the response at the next time step, thereby achieving an intelligent assessment of the user's knowledge mastery. Current works mainly model the skills related to the exercises, ignoring both the rich information contained in the contexts of the exercises and the personalized representation of users; moreover, current deep learning-based methods act as black boxes, which undermines the explainability of their predictions. In this paper, we propose an interpretable deep knowledge tracing (IDKT) framework. First, we alleviate the data sparsity problem by using the contextual information of exercises and skills to mine the implicit relations between them, obtaining more representative exercise and skill embeddings. Then the hidden knowledge state, obtained by modeling the user's answering sequence, is fused with these embeddings to learn a personalized attention, which is used to aggregate neighbor embeddings in the exercise-skill graph and yields a personalized representation of the current exercise conditioned on the user's knowledge state. Finally, given a prediction result, an inference path is selected as its explanation based on the personalized attention. Compared with typical existing methods, IDKT exhibits its superiority by not only achieving the best prediction performance but also providing an explanation at the inference-path level for the prediction results.

Key words: interpretability, knowledge tracing, personalization, attention, context information
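
To make the aggregation step described in the abstract concrete, the following is a minimal sketch of a personalized graph-attention layer of the kind the framework relies on; it is not the authors' implementation. The module name PersonalizedGraphAttention, the concatenate-and-score fusion, and all tensor shapes are assumptions for illustration only; the abstract states merely that the hidden knowledge state is fused with exercise and skill embeddings to learn attention weights that aggregate neighbor embeddings in the exercise-skill graph.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PersonalizedGraphAttention(nn.Module):
    """Aggregates neighbor embeddings in the exercise-skill graph with
    attention weights conditioned on the user's hidden knowledge state."""

    def __init__(self, dim: int):
        super().__init__()
        # Scores each (knowledge state, center node, neighbor) triple.
        self.score = nn.Linear(3 * dim, 1)

    def forward(self, h_state, center_emb, neighbor_embs):
        # h_state:       (d,)   hidden knowledge state from the sequence model
        # center_emb:    (d,)   embedding of the current exercise (or skill)
        # neighbor_embs: (n, d) embeddings of its graph neighbors
        n = neighbor_embs.size(0)
        fused = torch.cat(
            [
                h_state.unsqueeze(0).expand(n, -1),
                center_emb.unsqueeze(0).expand(n, -1),
                neighbor_embs,
            ],
            dim=-1,
        )
        alpha = F.softmax(self.score(fused).squeeze(-1), dim=0)  # personalized weights, (n,)
        aggregated = alpha @ neighbor_embs                        # weighted neighbor sum, (d,)
        return aggregated, alpha

# Toy usage: the neighbor receiving the largest weight would head the
# inference path offered as the explanation for a prediction.
if __name__ == "__main__":
    d, n = 16, 5
    attn = PersonalizedGraphAttention(d)
    agg, alpha = attn(torch.randn(d), torch.randn(d), torch.randn(n, d))
    print(agg.shape, alpha.argmax().item())

In this reading, the same attention weights serve double duty: they produce the personalized exercise representation used for prediction, and following the highest-weight edges from the current exercise yields the inference path reported as the explanation.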
