
Journal of Computer Research and Development, 2021, Vol. 58, Issue (5): 964-976. doi: 10.7544/issn1000-1239.2021.20200978

Special Topic: Artificial Intelligence Security and Privacy Protection Technologies (2021)

• Survey •

Research Progress of Neural Networks Watermarking Technology

Zhang Yingjun1,4, Chen Kai2,3, Zhou Geng1,4, Lü Peizhuo2,3, Liu Yong2, Huang Liang5   

  1(Trusted Computing and Information Assurance Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing 100190); 2(State Key Laboratory of Information Security (Institute of Information Engineering, Chinese Academy of Sciences), Beijing 100195); 3(School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049); 4(College of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100049); 5(Legendsec Information Technology (Beijing) Inc., Beijing 100015) (yingjun2011@iscas.ac.cn)
  • Online: 2021-05-01
  • Supported by: 
    This work was supported by the Key Program of the National Natural Science Foundation of China (U1836211), the National Natural Science Foundation of China (62072448), the Beijing Natural Science Foundation (JQ18011), the Excellent Member of Youth Innovation Promotion Association, Chinese Academy of Sciences (Y202046), and the Open Project of the National Engineering Laboratory of Big Data Collaborative Security.

Abstract: With the popularization and application of deep neural networks, trained neural network models have become important assets and are provided to users as machine learning services (MLaaS). However, attackers, acting as a special kind of user, can extract the models while using these services. Considering the high value of the models and the risk of theft, service providers are paying more attention to the copyright protection of their models. The main technique is adapted from digital watermarking and applied to neural networks, and is called neural network watermarking. In this paper, we first analyze this kind of watermarking and present the basic requirements for its design. Then we introduce the related technologies involved in neural network watermarking. Typically, service providers embed watermarks into their neural networks. Once they suspect that a model has been stolen, they can verify the existence of the watermark in that model. Sometimes the providers can obtain the suspected model and check for watermarks in its parameters (white-box); at other times they cannot acquire the model and can only check the input/output pairs of the suspected model (black-box). We discuss these watermarking methods and potential attacks against the watermarks from the viewpoints of robustness, stealthiness, and security. In the end, we discuss future directions and potential challenges.
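To make the white-box/black-box distinction concrete, the following is a minimal PyTorch-style sketch of one common black-box approach, trigger-set (backdoor-based) watermarking: the owner trains the model to memorize a secret set of key inputs and labels, and later verifies ownership purely from the suspect model's input/output behavior. All function names, shapes, and the 0.9 verification threshold below are illustrative assumptions, not details of any specific scheme surveyed in the paper.

import torch
import torch.nn as nn

def make_trigger_set(num=32, dim=20, num_classes=10, seed=0):
    """Owner-generated secret 'key' inputs paired with owner-chosen target labels."""
    g = torch.Generator().manual_seed(seed)
    inputs = torch.rand(num, dim, generator=g)                     # random key patterns
    labels = torch.randint(0, num_classes, (num,), generator=g)    # secret labels
    return inputs, labels

def embed_watermark(model, train_loader, trigger_inputs, trigger_labels,
                    epochs=1, lr=1e-3):
    """Embedding: train on normal batches mixed with the trigger set so the model
    memorizes the secret (input, label) pairs while keeping normal accuracy.
    Assumes training inputs and trigger inputs share the same feature shape."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in train_loader:
            x = torch.cat([x, trigger_inputs])
            y = torch.cat([y, trigger_labels])
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

def verify_watermark(suspect_model, trigger_inputs, trigger_labels, threshold=0.9):
    """Black-box verification: only input/output access to the suspect model is
    needed. A high match rate on the secret trigger labels indicates the watermark."""
    suspect_model.eval()
    with torch.no_grad():
        preds = suspect_model(trigger_inputs).argmax(dim=1)
    match_rate = (preds == trigger_labels).float().mean().item()
    return match_rate >= threshold, match_rate

A white-box scheme would instead inspect the suspect model's parameters directly, for example decoding a bit string that was embedded into selected weights via a regularization term during training.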

Key words: digital watermark, deep neural network, neural network backdoor, neural network watermark, attacks on watermarking

CLC Number: