    Zhang Yingjun, Chen Kai, Zhou Geng, Lü Peizhuo, Liu Yong, Huang Liang. Research Progress of Neural Networks Watermarking Technology[J]. Journal of Computer Research and Development, 2021, 58(5): 964-976. DOI: 10.7544/issn1000-1239.2021.20200978

    Research Progress of Neural Networks Watermarking Technology

    • Abstract: With the widespread deployment of deep neural networks, trained models have become valuable assets that provide services to users. While offering these services, providers are paying increasing attention to protecting the copyright of their models, which has given rise to neural network watermarking. This survey first analyzes watermarks and their basic design requirements, and introduces the related technologies involved in neural network watermarking. It then compares deep neural network watermarking techniques, with a detailed analysis of white-box and black-box watermarks. It further compares attacks against neural network watermarks and, according to the attack goal, classifies them into robustness attacks, stealthiness attacks, and security attacks. Finally, it discusses future directions and challenges.
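To make the white-box setting concrete, here is a minimal sketch of parameter-based watermark extraction in the spirit the survey describes: the owner embeds a bit string into the model's weights using a secret projection key, and later recovers it directly from a suspect model's parameters. The key matrix, weight vector, and message below are illustrative values of my own, not taken from the paper.

```python
# Secret projection key held by the model owner (illustrative values).
SECRET_KEY = [
    [0.2, -0.5, 0.1, 0.7],
    [-0.3, 0.4, -0.6, 0.2],
]
MESSAGE = [1, 0]  # watermark bits chosen by the owner at embedding time

def extract_bits(weights, key):
    """Project the flattened weights through each row of the secret key
    and threshold at zero to recover the embedded bit string."""
    return [1 if sum(k * w for k, w in zip(row, weights)) > 0 else 0
            for row in key]

# Weights read directly from a suspect model (white-box access).
suspect_weights = [0.9, -0.1, 0.3, 0.5]

print(extract_bits(suspect_weights, SECRET_KEY) == MESSAGE)  # True
```

In a real scheme the embedding step would add a regularization term during training so that the projection yields the chosen bits; the sketch shows only the extraction side, which is what the verifier runs.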

       

      Abstract: With the popularization and application of deep neural networks, the trained neural network model has become an important asset and is provided to users as machine learning as a service (MLaaS). However, as a special kind of user, attackers can extract the models while using the services. Considering the high value of the models and the risk of theft, service providers are starting to pay more attention to the copyright protection of their models. The main protection technique adapts digital watermarking to neural networks and is called neural network watermarking. In this paper, we first analyze this kind of watermarking and present the basic requirements of its design. Then we introduce the related technologies involved in neural network watermarking. Typically, service providers embed watermarks in their neural networks. Once they suspect that a model has been stolen from them, they can verify the existence of the watermark in that model. Sometimes the providers can obtain the suspected model and check for watermarks in the model parameters (white-box). But sometimes the providers cannot acquire the model; all they can do is check the input/output pairs of the suspected model (black-box). We discuss these watermarking methods and potential attacks against the watermarks from the viewpoints of robustness, stealthiness, and security. In the end, we discuss future directions and potential challenges.
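The black-box verification protocol described above can be sketched in a few lines: the owner keeps a secret trigger set of (input, expected label) pairs fixed when the watermark was embedded, queries the suspect model through its normal prediction interface, and claims ownership when the match rate exceeds a threshold. All names here (`verify_watermark`, `TRIGGER_SET`, the 0.9 threshold, and the dictionary stand-ins for remote models) are illustrative assumptions, not details from the paper.

```python
# Secret trigger set fixed by the owner when the watermark was embedded.
TRIGGER_SET = [
    ("trigger_01", 7),
    ("trigger_02", 3),
    ("trigger_03", 7),
]

def verify_watermark(query_model, trigger_set, threshold=0.9):
    """Query the suspect model on the trigger inputs and claim ownership
    when the fraction of expected labels meets the threshold."""
    matches = sum(1 for x, y in trigger_set if query_model(x) == y)
    return matches / len(trigger_set) >= threshold

# Stand-ins for remotely deployed models that expose only input/output pairs.
stolen_model = {"trigger_01": 7, "trigger_02": 3, "trigger_03": 7}.get
clean_model = {"trigger_01": 0, "trigger_02": 1, "trigger_03": 2}.get

print(verify_watermark(stolen_model, TRIGGER_SET))  # True
print(verify_watermark(clean_model, TRIGGER_SET))   # False
```

The threshold guards against chance agreement: an independently trained model should label the trigger inputs essentially at random, so a high match rate is statistical evidence of the embedded watermark.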

       
