    Citation: Ji Shouling, Li Jinfeng, Du Tianyu, Li Bo. Survey on Techniques, Applications and Security of Machine Learning Interpretability[J]. Journal of Computer Research and Development, 2019, 56(10): 2071-2096. DOI: 10.7544/issn1000-1239.2019.20190540


    Survey on Techniques, Applications and Security of Machine Learning Interpretability


      Abstract: While machine learning has achieved great success in various domains, the lack of interpretability has limited its widespread application in real-world tasks, especially security-critical ones. To overcome this crucial weakness, intensive research on improving the interpretability of machine learning models has emerged, and a plethora of interpretation methods have been proposed to help end users understand their inner working mechanisms. However, research on model interpretation is still in its infancy, and a large number of scientific issues remain to be resolved. Furthermore, different researchers have different perspectives on the interpretation problem, give different definitions of interpretability, and place different emphases in the interpretation methods they propose. To date, the research community still lacks a comprehensive understanding of interpretability as well as a scientific guide for research on model interpretation. In this survey, we review the interpretability problem in machine learning and provide a systematic summary and scientific classification of existing research. We also discuss the potential applications of interpretation-related techniques, analyze the relationship between interpretability and the security of interpretable machine learning, and examine the current research challenges and potential future research directions, aiming to provide guidance for future researchers and to facilitate the research and application of model interpretability.

       
