    Cheng Keyang, Wang Ning, Shi Wenxi, Zhan Yongzhao. Research Advances in the Interpretability of Deep Learning[J]. Journal of Computer Research and Development, 2020, 57(6): 1208-1217. DOI: 10.7544/issn1000-1239.2020.20190485

    Research Advances in the Interpretability of Deep Learning

• Abstract: Research on the interpretability of deep learning is a cross-disciplinary topic involving artificial intelligence, machine learning, logic, cognitive psychology, and other fields. It has important theoretical significance and practical application value in many domains, such as information push, medical research, finance, and information security. Although a great deal of work has been done in this field in recent years, many issues remain open. This paper reviews the history of deep learning interpretability research and related work. First, the history of interpretable deep learning is introduced from three aspects: the origin of interpretable deep learning, the research exploration stage, and the model construction stage. Then, the current state of interpretability analysis of existing deep learning models is presented from three aspects: visual analysis, robustness perturbation analysis, and sensitivity analysis. Research on constructing interpretable deep learning models is surveyed from four aspects: model agents, logical reasoning, network node association analysis, and improvements to traditional machine learning models. The limitations of current research are then analyzed and discussed. Finally, typical applications of interpretable deep learning are presented, and possible future research directions for this field are forecast along with suggestions.

