
Research Advances in the Interpretability of Deep Learning

Cheng Keyang, Wang Ning, Shi Wenxi, Zhan Yongzhao

Citation: Cheng Keyang, Wang Ning, Shi Wenxi, Zhan Yongzhao. Research Advances in the Interpretability of Deep Learning[J]. Journal of Computer Research and Development, 2020, 57(6): 1208-1217. DOI: 10.7544/issn1000-1239.2020.20190485. CSTR: 32373.14.issn1000-1239.2020.20190485


  • CLC number: TP391

Research Advances in the Interpretability of Deep Learning

Funds: This work was supported by the National Natural Science Foundation of China (61972183, 61672268) and the Director Foundation Project of the National Engineering Laboratory for Public Safety Risk Perception and Control by Big Data.
  • Abstract: Research on the interpretability of deep learning is an interdisciplinary topic spanning artificial intelligence, machine learning, cognitive psychology, and logic, with significant theoretical and practical value in fields such as information push, medical research, finance, and information security. Although a substantial body of work has accumulated in recent years, many open issues remain. This paper reviews the history of deep learning interpretability research across three stages: its origins, the research exploration stage, and the model construction stage. The current state of interpretability analysis for existing models is then surveyed from three perspectives: visualization analysis, robustness perturbation analysis, and sensitivity analysis. Research on constructing interpretable deep learning models is examined from four angles: model surrogates, logical reasoning, network node association analysis, and improvements to traditional machine learning models. Finally, the paper analyzes the limitations of current research, presents typical applications of interpretable deep learning, and outlines possible future research directions.
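To make the sensitivity-analysis category named in the abstract concrete, the following is a minimal illustrative sketch, not a method from the surveyed papers: it ranks input features of a stand-in model (a hand-weighted logistic unit with made-up weights `W`) by a central finite-difference estimate of how strongly each feature perturbs the output. Gradient-based saliency in real interpretability work follows the same idea, with the network's actual gradient in place of the finite difference.

```python
import math

# A tiny fixed "model": a logistic unit standing in for a trained network.
# The weights are invented purely for illustration.
W = [2.0, -0.5, 0.1]
B = 0.0

def model(x):
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

def sensitivity(f, x, eps=1e-4):
    """Central finite-difference sensitivity of f to each input feature."""
    scores = []
    for i in range(len(x)):
        hi, lo = x[:], x[:]
        hi[i] += eps
        lo[i] -= eps
        scores.append(abs(f(hi) - f(lo)) / (2 * eps))
    return scores

x = [0.3, 0.8, -1.2]
scores = sensitivity(model, x)
# Rank features by how strongly they influence the output.
ranking = sorted(range(len(x)), key=lambda i: -scores[i])
```

For this logistic stand-in, the score for feature i is |W[i]| scaled by the sigmoid's slope, so the ranking simply follows the weight magnitudes; for a deep network the same per-feature scores would form a saliency map over the input.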

Metrics
  • Article views: 4812
  • Full-text HTML views: 33
  • PDF downloads: 3099
  • Citations: 4
Publication history
  • Published online: 2020-05-31
