    Chen Yufei, Shen Chao, Wang Qian, Li Qi, Wang Cong, Ji Shouling, Li Kang, Guan Xiaohong. Security and Privacy Risks in Artificial Intelligence Systems[J]. Journal of Computer Research and Development, 2019, 56(10): 2135-2150. DOI: 10.7544/issn1000-1239.2019.20190415


    Security and Privacy Risks in Artificial Intelligence Systems

      Abstract: Human society is witnessing a wave of artificial intelligence (AI) driven by deep learning techniques, which is bringing a technological revolution to human production and life. In some specific fields, AI has achieved or even surpassed human-level performance. However, most previous machine learning theories did not consider open or even adversarial operating environments, and the security and privacy issues of AI systems are gradually being exposed. Beyond insecure code implementations, biased models, adversarial examples, and sensor spoofing can also lead to security risks that are hard to discover with traditional security analysis tools. This paper reviews previous work on AI system security and privacy, revealing the potential security and privacy risks hidden in AI systems. First, we introduce a threat model of AI systems, covering attack surfaces, attack capabilities, and attack goals. Second, we analyze the security and privacy risks and countermeasures for four critical components of AI systems: data input (sensors), data preprocessing, the machine learning model, and the output. Finally, we discuss future research trends in AI system security. The aim of this paper is to draw the attention of the computer security community and the AI community to the security and privacy of AI systems, so that they can work together to unlock AI's potential and build a brighter future.
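
      Among the model-level risks mentioned in the abstract, adversarial examples are one of the most widely studied. As a minimal illustrative sketch only, and not a method proposed in this paper, the following Python code shows the well-known fast gradient sign method (FGSM) for crafting such examples; the classifier, input batch, and perturbation budget are placeholder assumptions.

```python
# Minimal sketch of the fast gradient sign method (FGSM), a well-known way
# to craft adversarial examples; illustrative only, not this paper's method.
# Assumes a differentiable PyTorch classifier `model`, an input batch `x`
# with pixel values in [0, 1], and integer class labels `y`.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb x along the sign of the loss gradient, bounded by epsilon in L-infinity."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each pixel in the direction that increases the loss, then keep values valid.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```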

       
