Citation: Zhang Yingjun, Chen Kai, Zhou Geng, Lü Peizhuo, Liu Yong, Huang Liang. Research Progress of Neural Networks Watermarking Technology[J]. Journal of Computer Research and Development, 2021, 58(5): 964-976. DOI: 10.7544/issn1000-1239.2021.20200978

Research Progress of Neural Networks Watermarking Technology

Funds: This work was supported by the Key Program of the National Natural Science Foundation of China (U1836211), the National Natural Science Foundation of China (62072448), the Beijing Natural Science Foundation (JQ18011), the Excellent Member of Youth Innovation Promotion Association, Chinese Academy of Sciences (Y202046), and the Open Project of the National Engineering Laboratory of Big Data Collaborative Security.
More Information
  • Published Date: April 30, 2021

Abstract: With the widespread deployment of deep neural networks, trained models have become valuable assets and are offered to users as machine learning as a service (MLaaS). However, attackers, acting as a special kind of user, can extract these models while using the services. Given the high value of the models and the risk of theft, service providers are paying increasing attention to protecting the copyright of their models. The main technique, adapted from digital watermarking and applied to neural networks, is called neural network watermarking. In this paper, we first analyze this kind of watermarking and present the basic requirements for its design. We then introduce the related technologies involved in neural network watermarking. Typically, a service provider embeds a watermark in its neural network; once it suspects that a model has been stolen, it verifies whether the watermark is present in that model. In some cases the provider can obtain the suspect model and check for the watermark in its parameters (white-box verification); in other cases the provider cannot acquire the model and can only inspect the input/output pairs of the suspect model (black-box verification). We discuss these watermarking methods and potential attacks against them from the viewpoints of robustness, stealthiness, and security. Finally, we discuss future directions and open challenges.
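
The black-box setting summarized above is often realized with a trigger set: the owner trains the model to memorize secret input/label pairs and later proves ownership purely from the suspect model's responses. The following is a minimal, illustrative sketch of that idea, not the paper's own method; it assumes PyTorch, and every name (make_trigger_set, embed_watermark, verify_watermark, the toy classifier, the 0.9 threshold) is hypothetical.

# Illustrative trigger-set (black-box) neural network watermarking sketch.
# Assumptions: PyTorch is available; the model is a toy classifier; in practice
# the trigger loss would be mixed with the normal training objective.
import torch
import torch.nn as nn

def make_trigger_set(n=32, dim=20, num_classes=4, seed=0):
    # Random out-of-distribution "key" inputs with secret labels known only to the owner.
    g = torch.Generator().manual_seed(seed)
    x = torch.rand(n, dim, generator=g)
    y = torch.randint(0, num_classes, (n,), generator=g)
    return x, y

def embed_watermark(model, trigger_x, trigger_y, epochs=200, lr=1e-2):
    # Fine-tune the model so it memorizes the trigger set (the watermark).
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(trigger_x), trigger_y)
        loss.backward()
        opt.step()
    return model

def verify_watermark(suspect_model, trigger_x, trigger_y, threshold=0.9):
    # Black-box verification: query the suspect model and measure agreement
    # with the secret labels; a high match rate indicates the watermark.
    with torch.no_grad():
        preds = suspect_model(trigger_x).argmax(dim=1)
    match_rate = (preds == trigger_y).float().mean().item()
    return match_rate, match_rate >= threshold

if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 4))
    trigger_x, trigger_y = make_trigger_set()
    embed_watermark(model, trigger_x, trigger_y)
    rate, detected = verify_watermark(model, trigger_x, trigger_y)
    print(f"trigger match rate = {rate:.2f}, watermark detected = {detected}")

A white-box scheme would instead inspect the suspect model's parameters, for example by projecting a weight matrix with a secret key and checking the recovered bit string; the robustness, stealthiness, and security concerns surveyed in the paper apply to both settings.
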
Related Articles

    [1]Xie Guo, Zhang Huaiwen, Wang Le, Liao Qing, Zhang Aoqian, Zhou Zhili, Ge Huilin, Wang Zhiheng, Wu Guozheng. Acceptance and Funding Status of Artificial Intelligence Discipline Projects Under the National Natural Science Foundation of China in 2024[J]. Journal of Computer Research and Development, 2025, 62(3): 648-661. DOI: 10.7544/issn1000-1239.202550008
    [2]Li Xu, Zhu Rui, Chen Xiaolei, Wu Jinxuan, Zheng Yi, Lai Chenghang, Liang Yuxuan, Li Bin, Xue Xiangyang. A Survey of Hallucinations in Large Vision-Language Models: Causes, Evaluations and Mitigations[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202440444
    [3]Chen Xuanting, Ye Junjie, Zu Can, Xu Nuo, Gui Tao, Zhang Qi. Robustness of GPT Large Language Models on Natural Language Processing Tasks[J]. Journal of Computer Research and Development, 2024, 61(5): 1128-1142. DOI: 10.7544/issn1000-1239.202330801
    [4]Zhang Mi, Pan Xudong, Yang Min. JADE-DB: A Universal Testing Benchmark for Large Language Model Safety Based on Targeted Mutation[J]. Journal of Computer Research and Development, 2024, 61(5): 1113-1127. DOI: 10.7544/issn1000-1239.202330959
    [5]Shu Wentao, Li Ruixiao, Sun Tianxiang, Huang Xuanjing, Qiu Xipeng. Large Language Models: Principles, Implementation, and Progress[J]. Journal of Computer Research and Development, 2024, 61(2): 351-361. DOI: 10.7544/issn1000-1239.202330303
    [6]Yang Yi, Li Ying, Chen Kai. Vulnerability Detection Methods Based on Natural Language Processing[J]. Journal of Computer Research and Development, 2022, 59(12): 2649-2666. DOI: 10.7544/issn1000-1239.20210627
    [7]Pan Xudong, Zhang Mi, Yang Min. Fishing Leakage of Deep Learning Training Data via Neuron Activation Pattern Manipulation[J]. Journal of Computer Research and Development, 2022, 59(10): 2323-2337. DOI: 10.7544/issn1000-1239.20220498
    [8]Pan Xuan, Xu Sihan, Cai Xiangrui, Wen Yanlong, Yuan Xiaojie. Survey on Deep Learning Based Natural Language Interface to Database[J]. Journal of Computer Research and Development, 2021, 58(9): 1925-1950. DOI: 10.7544/issn1000-1239.2021.20200209
    [9]Zheng Haibin, Chen Jinyin, Zhang Yan, Zhang Xuhong, Ge Chunpeng, Liu Zhe, Ouyang Yike, Ji Shouling. Survey of Adversarial Attack, Defense and Robustness Analysis for Natural Language Processing[J]. Journal of Computer Research and Development, 2021, 58(8): 1727-1750. DOI: 10.7544/issn1000-1239.2021.20210304
    [10]Wang Ye, Chen Junwu, Xia Xin, Jiang Bo. Intelligent Requirements Elicitation and Modeling: A Literature Review[J]. Journal of Computer Research and Development, 2021, 58(4): 683-705. DOI: 10.7544/issn1000-1239.2021.20200740

Article views (1395), PDF downloads (1232)
