    Research Advances in the Interpretability of Deep Learning

Abstract: Research on the interpretability of deep learning is an interdisciplinary topic spanning artificial intelligence, machine learning, cognitive psychology, logic, and other fields, and it has significant theoretical importance and practical value in areas such as information recommendation, medical research, finance, and information security. Although a considerable body of work has accumulated in recent years, many issues remain open. This paper reviews the history of deep learning interpretability research in three phases: its origins, the research exploration stage, and the model construction stage. It then surveys interpretability analysis of existing deep models from three perspectives: visualization analysis, robustness perturbation analysis, and sensitivity analysis. Research on constructing interpretable deep learning models is examined from four angles: surrogate models, logical reasoning, network node association analysis, and improvements to traditional machine learning models. The paper further analyzes the limitations of current research, presents typical applications of interpretable deep learning, and outlines possible directions for future work.
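To make the "sensitivity analysis" category concrete: a common instance is gradient-based saliency, which measures how strongly the model's predicted score reacts to each input feature. The sketch below is a minimal illustration only, not a method from the surveyed paper; the torchvision classifier and the random input are placeholders for any differentiable model and real data.

```python
# Minimal sketch of gradient-based sensitivity analysis (saliency):
# the magnitude of d(score)/d(input) indicates how sensitive the
# prediction is to each input feature.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for any differentiable classifier
model.eval()

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in input image
score = model(x).max(dim=1).values  # score of the predicted class
score.backward()                    # populate x.grad with d(score)/d(x)

saliency = x.grad.abs().max(dim=1).values  # per-pixel sensitivity map
print(saliency.shape)  # torch.Size([1, 224, 224])
```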
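Likewise, the "surrogate models" approach to interpretable model construction approximates a black-box model locally with a simple, interpretable one. Below is a LIME-style sketch under stated assumptions: black_box and the instance x0 are hypothetical placeholders, and a weighted ridge regression stands in for the interpretable local model.

```python
# Minimal sketch of a local surrogate: fit an interpretable linear
# model to the black box's behavior in a neighborhood of one instance.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for any opaque model's probability output.
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 1])))

x0 = np.array([0.5, -0.2, 0.1])                  # instance to explain
X = x0 + rng.normal(scale=0.1, size=(500, 3))    # perturb around x0
y = black_box(X)                                 # query the black box
weights = np.exp(-np.sum((X - x0) ** 2, axis=1) / 0.05)  # locality kernel

surrogate = Ridge(alpha=1.0).fit(X, y, sample_weight=weights)
print(surrogate.coef_)  # local feature attributions (3rd feature ~ 0)
```

The third coefficient comes out near zero because the stand-in black box ignores that feature, which is exactly the kind of local attribution a surrogate is meant to expose.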
