ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development (计算机研究与发展), 2021, Vol. 58, Issue (12): 2630-2644. doi: 10.7544/issn1000-1239.2021.20210997

Special Topic: 2021 Special Issue on Interpretable Intelligent Learning Methods and Their Applications

• Artificial Intelligence •

Interpretable Knowledge Tracing Method Based on a Hierarchical Attention Network

Sun Jianwen1,2, Zhou Jianpeng1,2, Liu Sannüya1,2, He Feijuan3, Tang Yun4

  1(Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079); 2(National Engineering Laboratory for Educational Big Data (Central China Normal University), Wuhan 430079); 3(Department of Computer Science, Xi’an Jiaotong University City College, Xi’an 710018); 4(School of Psychology, Central China Normal University, Wuhan 430079) (sunjw@ccnu.edu.cn)
  • Publication date: 2021-12-01
  • Supported by: 
    Major Program of National Science and Technology Innovation 2030 of China for New Generation of Artificial Intelligence (2020AAA0108804); National Natural Science Foundation of China (62077021, 61977030, 61937001, 61807011); Natural Science Basic Research Program of Shaanxi Province (2020JM-711); Shaanxi Provincial Education Science “Thirteenth Five-Year” Planning Project (SGH20Y1397); Special Research Project on Curriculum Ideology and Politics of Xi’an Jiaotong University City College (KCSZ01006); Teaching Reform Research Project for Postgraduates of Central China Normal University (2020JG14)

Hierarchical Attention Network Based Interpretable Knowledge Tracing

Sun Jianwen1,2, Zhou Jianpeng1,2, Liu Sannüya1,2, He Feijuan3, Tang Yun4   

  1(Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079); 2(National Engineering Laboratory for Educational Big Data (Central China Normal University), Wuhan 430079); 3(Department of Computer Science, Xi’an Jiaotong University City College, Xi’an 710018); 4(School of Psychology, Central China Normal University, Wuhan 430079)
  • Online: 2021-12-01
  • Supported by: 
    This work was supported by the Major Program of National Science and Technology Innovation 2030 of China for New Generation of Artificial Intelligence (2020AAA0108804), the National Natural Science Foundation of China (62077021, 61977030, 61937001, 61807011), the Natural Science Basic Research Program of Shaanxi Province (2020JM-711), the Shaanxi Provincial Education Science “Thirteenth Five-Year” Planning Project (SGH20Y1397), the Special Research Project on Curriculum Ideology and Politics of Xi’an Jiaotong University City College (KCSZ01006), and the Teaching Reform Research Project for Postgraduates of Central China Normal University (2020JG14).

Abstract: Knowledge tracing is a data-driven technique for modeling learners, which aims to predict students' knowledge mastery states or future answering performance from their historical response data. In recent years, powered by deep learning algorithms, deep knowledge tracing has become a research hotspot in this field. To address the problems that deep knowledge tracing models generally behave as black boxes, that their decision processes or results lack interpretability, and that they therefore struggle to provide high-value educational services such as learning attribution analysis and error-cause tracing, a knowledge tracing model based on a hierarchical attention network is proposed. By mining multi-dimensional, deep semantic associations between questions, a network structure with three attention layers, over question elements, semantics, and records, is built, in which graph attention networks and the self-attention mechanism are used for question embedding, semantic fusion, and record retrieval. In particular, a regularization term that improves model interpretability and a trade-off factor are introduced into the loss function to regulate the balance between the model's predictive performance and the strength of its interpretability. In addition, fidelity, a metric of the interpretability of prediction results, is defined to quantitatively evaluate the interpretability of knowledge tracing models. Finally, experimental results on 6 benchmark datasets in this domain show that the proposed method effectively improves model interpretability.

Key words: knowledge tracing, interpretability, hierarchical attention, question semantics, fidelity

Abstract: Knowledge tracing is a data-driven learner modeling technology which aims to predict learners’ knowledge mastery or future performance based on their historical learning data. Recently, with the support of deep learning algorithms, deep learning-based knowledge tracing has become a research hotspot in the field. Deep learning-based knowledge tracing models generally have ‘black-box’ attributes: their decision-making processes or results lack interpretability, and it is difficult for them to provide high-value educational services such as learning attribution analysis and error-cause backtracking. To address these problems, a Hierarchical Attention network based Knowledge Tracing model (HAKT) is proposed. By mining the multi-dimensional and in-depth semantic associations between questions, a network structure containing three layers of attention, over question elements, semantics, and records, is established, in which graph attention networks and the self-attention mechanism are utilized for question representation learning, semantic fusion, and record retrieval. A regularization term that improves model interpretability is introduced into the loss function, together with a trade-off factor that balances the model’s predictive performance and interpretability. Besides, we define fidelity, an interpretability measurement index for the prediction results, which can quantitatively evaluate model interpretability. Finally, the experimental results on 6 benchmark datasets show that our method effectively improves the model interpretability.
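The abstract states that the loss function combines a prediction objective with an interpretability regularization term controlled by a trade-off factor, but it does not give the formula. The sketch below is a minimal illustration of that structure only, under stated assumptions: PyTorch is used, and the entropy-style regularizer, the function name hakt_style_loss, and the tensor shapes are all hypothetical rather than the paper's actual definition.

```python
# Hypothetical sketch of a "prediction loss + lambda * interpretability regularizer"
# structure, as described in the abstract. The concrete regularizer here (attention
# entropy) is an illustrative assumption, not the paper's formulation.
import torch
import torch.nn.functional as F

def hakt_style_loss(pred_prob, target, record_attention, lam=0.1):
    # pred_prob:        (batch,) predicted probabilities of a correct response
    # target:           (batch,) observed responses in {0, 1}, as floats
    # record_attention: (batch, seq_len) record-level attention weights,
    #                   assumed to sum to 1 along the last dimension
    # lam:              trade-off factor between predictive performance and interpretability

    # Standard binary cross-entropy for next-response prediction.
    l_pred = F.binary_cross_entropy(pred_prob, target)

    # Illustrative interpretability regularizer: penalize diffuse (high-entropy)
    # attention so that each prediction can be attributed to a few past records.
    entropy = -(record_attention * torch.log(record_attention + 1e-8)).sum(dim=-1)
    l_interp = entropy.mean()

    return l_pred + lam * l_interp
```

Increasing lam in this sketch trades some predictive accuracy for sharper, more attributable attention, which is the kind of controllable balance the abstract describes.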

Key words: knowledge tracing, interpretability, hierarchical attention, question semantics, fidelity

CLC Number: