Jin Biao, Lin Xiang, Xiong Jinbo, You Weijing, Li Xuan, Yao Zhiqiang. Intellectual Property Protection of Deep Neural Network Models Based on Watermarking Technology[J]. Journal of Computer Research and Development, 2024, 61(10): 2587-2606. DOI: 10.7544/issn1000-1239.202440413

Intellectual Property Protection of Deep Neural Network Models Based on Watermarking Technology

Funds: This work was supported by the National Natural Science Foundation of China (62272102, 62272103, 62202102), the Key Project of the Natural Science Foundation of Fujian Province (2023J02014), and the Natural Science Foundation of Fujian Province (2023J01531, 2023J01295).
  • Author Bio:

    Jin Biao: born in 1985. PhD, associate professor, master supervisor. Member of CCF. His main research interests include privacy preservation and machine learning.

    Lin Xiang: born in 1996. Master. His main research interest is artificial intelligence security.

    Xiong Jinbo: born in 1981. PhD, professor, PhD supervisor. Senior member of CCF. His main research interests include data security, privacy preservation, and artificial intelligence security.

    You Weijing: born in 1994. PhD, associate professor, master supervisor. Member of CCF. Her main research interests include data security, cloud storage security, and artificial intelligence security.

    Li Xuan: born in 1984. PhD, associate professor, master supervisor. Member of CCF. Her main research interests include data security, cloud computing security, and privacy preservation.

    Yao Zhiqiang: born in 1967. PhD, professor, PhD supervisor. Senior member of CCF. His main research interests include application security, big data security, and privacy preservation.

  • Received Date: May 30, 2024
  • Revised Date: July 17, 2024
  • Available Online: September 13, 2024
  • Constructing an excellent deep neural network (DNN) model requires large amounts of training data, high-performance equipment, and deep expertise; DNN models should therefore be regarded as the intellectual property (IP) of their owners. Protecting the IP of a DNN model also reflects the value of the data elements that go into its development and training. However, DNN models are vulnerable to theft, tampering, and illegal dissemination by malicious users, and finding effective strategies to protect their IP has become both a pivotal area of academic research and an urgent challenge for industry. Unlike existing related surveys, we focus on the application scenarios of DNN model watermarking. We review DNN model IP protection methods based on watermarking technology along two dimensions: robust model watermarking for model copyright declaration and fragile model watermarking for model integrity verification, and we discuss their characteristics, advantages, and limitations. We further elaborate on practical applications of DNN model watermarking technology. Finally, by summarizing the techniques common to the various methods, we outline future research directions for DNN model IP protection.
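  • The two paradigms contrasted above can be illustrated with a minimal toy sketch (not any specific scheme from the survey; all function names and parameters are illustrative). A robust white-box watermark embeds a bit string into a weight vector so it survives small perturbations such as fine-tuning noise, while a fragile fingerprint is a digest that breaks under any tampering:

    ```python
    import hashlib
    import numpy as np

    def embed_watermark(weights, secret_key, bits, steps=300, lr=0.2, margin=1.0):
        """Robust white-box watermark (toy): nudge a flat weight vector so that
        sign(secret_key @ w) encodes `bits`. A hinge-style update pushes each
        secret projection past `margin`; a small penalty keeps w close to the
        original weights (a stand-in for preserving task accuracy)."""
        w = weights.copy()
        target = 2.0 * bits - 1.0                  # map {0,1} -> {-1,+1}
        for _ in range(steps):
            proj = secret_key @ w
            viol = (proj * target) < margin        # rows still inside the margin
            grad = -(secret_key[viol].T @ target[viol]) + 0.001 * (w - weights)
            w -= lr * grad
        return w

    def extract_watermark(weights, secret_key):
        """Recover the embedded bits from the signs of the secret projections."""
        return (secret_key @ weights > 0).astype(int)

    def fragile_fingerprint(weights, precision=6):
        """Fragile integrity check (toy): hash the quantized weights. Any
        tampering larger than the quantization step changes the digest."""
        return hashlib.sha256(np.round(weights, precision).tobytes()).hexdigest()
    ```

    For example, with a 256-dimensional weight vector, a 32×256 Gaussian key with unit-norm rows, and 32 random bits, the bits decode exactly after embedding and still decode after small additive noise on the weights, whereas the fragile digest changes as soon as a single weight is perturbed. This is the essential trade-off the survey organizes: robustness serves copyright declaration, fragility serves integrity verification.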

  • [1]
    浪潮信息: 浪潮电子信息产业股份有限公司 . 2022−2023中国人工智能计算力发展评估报告[R/OL]. [2023-11-15]. http://www.inspur.com/lcjtww/resource/cms/article/2448319/2734787/2022122601.pdf

    Inspur information. Inspur Electronic Information Industry Co., Ltd. Evaluation Report on the Development of China’s AI Computing Power (2022-2023) [R/OL]. [2023-11-15]. https://www.inspur.com/lcjtww/resource/cms/article/2448319/2734787/2022122601.pdf
    [2]
    Baştanlar Y, Özuysal M. Introduction to machine learning[J]. miRNomics: MicroRNA Biology and Computational Analysis, 2014: 105−128. https://link.springer.com/protocol/10.1007/978-1-62703-748-8_7#citeas
    [3]
    Radford A, Narasimhan K, Salimans T, et al. Improving language understanding by generative pre-training[J/OL]. [2023-11-15]. https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf
    [4]
    Radford A, Wu J, Child R, et al. Language models are unsupervised multitask learners[J]. OpenAI blog, 2019, 1(8): 9
    [5]
    Brown T, Mann B, Ryder N, et al. Language models are few-shot learners[J]. Advances in Neural Information Processing Systems, 2020(33): 1877−1901
    [6]
    Cecil R R, Soares J. IBM Watson Studio: A Platform to Transform Data to Intelligence[M]//Pharmaceutical Supply Chains-Medicines Shortages. Cham: Springer, 2019: 183−192
    [7]
    Tramèr F, Zhang Fang, Juels A, et al. Stealing machine learning models via prediction {APIs}[C]//Proc of the 25th USENIX Security Symp (USENIX Security 16). Berkeley, CA: USENIX Association, 2016: 601−618
    [8]
    李凤华,李晖,牛犇,等. 数据要素流通与安全的研究范畴与未来发展趋势[J]. 通信学报,2024,45(5):1−11

    Li Fenghua, Li Hui, Niu Ben, et al. Research category and future development trend of data elements circulation and security[J]. Journal on Communications, 2024, 45(5): 1−11 (in Chinese)
    [9]
    王宝楠,胡风,张焕国,等. 从演化密码到量子人工智能密码综述[J]. 计算机研究与发展,2019,56(10):2112−2134 doi: 10.7544/issn1000-1239.2019.20190374

    Wang Baonan, Hu Feng, Zhang Huanguo, et al. From evolutionary cryptography to quantum artificial intelligent cryptography[J]. Journal of Computer Research and Development, 2019, 56(10): 2112−2134 (in Chinese) doi: 10.7544/issn1000-1239.2019.20190374
    [10]
    Van Schyndel R G, Tirkel A Z, Osborne C F. A digital watermark[C]//Proc of 1st Int Conf on Image Processing. Piscataway, NJ: IEEE, 1994: 86−90
    [11]
    Zhu M, Gupta S. To prune, or not to prune: Exploring the efficacy of pruning for model compression[J]. arXiv preprint, arXiv: 1710.01878, 2017
    [12]
    Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network[J]. arXiv preprint, arXiv: 1503.02531, 2015
    [13]
    蒋瀚,徐秋亮. 基于云计算服务的安全多方计算[J]. 计算机研究与发展,2016,53(10):2152−2162 doi: 10.7544/issn1000-1239.2016.20160685

    Jiang Han, Xu Qiuliang. Secure multiparty computation in cloud computing[J]. Journal of Computer Research and Development, 2016, 53(10): 2152−2162 (in Chinese) doi: 10.7544/issn1000-1239.2016.20160685
    [14]
    范科峰,莫玮,曹山,等. 数字版权管理技术及应用研究进展[J]. 电子学报,2007,35(6):1139−1147 doi: 10.3321/j.issn:0372-2112.2007.06.027

    Fan Kefeng, Mo Wei, Cao Shan, et al. Advances in digital rights management technology and application[J]. Acta Electonica Sinica, 2007, 35(6): 1139−1147 (in Chinese) doi: 10.3321/j.issn:0372-2112.2007.06.027
    [15]
    Uchida Y, Nagai Y, Sakazawa S, et al. Embedding watermarks into deep neural networks[C]//Proc of the 2017 ACM on Int Conf on Multimedia Retrieval. New York: ACM, 2017: 269−277
    [16]
    张颖君,陈恺,周赓,等. 神经网络水印技术研究进展[J]. 计算机研究与发展,2021,58(5):964−976 doi: 10.7544/issn1000-1239.2021.20200978

    Zhang Yingjun, Chen Kai, Zhou Geng, et al. Research progress of neural networks watermarking technology[J]. Journal of Computer Research and Development, 2021, 58(5): 964−976 (in Chinese) doi: 10.7544/issn1000-1239.2021.20200978
    [17]
    樊雪峰,周晓谊,朱冰冰,等. 深度神经网络模型版权保护方案综述[J]. 计算机研究与发展,2022,59(5):953−977 doi: 10.7544/issn1000-1239.20211115

    Fan Xuefeng, Zhou Xiaoyi, Zhu Bingbing, et al. Survey of copyright protection schemes based on DNN model[J]. Journal of Computer Research and Development, 2022, 59(5): 953−977 (in Chinese) doi: 10.7544/issn1000-1239.20211115
    [18]
    吴汉舟,张杰,李越,等. 人工智能模型水印研究进展[J]. 中国图象图形学报,2023,28(6):1792−1810 doi: 10.11834/jig.230010

    Wu Hanzhou, Zhang Jie, Li Yue, et al. Overview of artificial intelligence model watermarking[J]. Journal of Image and Graphics, 2023, 28(6): 1792−1810 (in Chinese) doi: 10.11834/jig.230010
    [19]
    Li Yue, Wang Hongxia, Barni M. A survey of deep neural network watermarking techniques[J]. Neurocomputing, 2021, 461: 171−193 doi: 10.1016/j.neucom.2021.07.051
    [20]
    Lukas N, Jiang E, Li Xinda, et al. Sok: How robust is image classification deep neural network watermarking?[C]//Proc of 2022 IEEE Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2022: 787−804
    [21]
    Zhang Xinpeng, Wang Shuozhong. Fragile watermarking with error-free restoration capability[J]. IEEE Transactions on Multimedia, 2008, 10(8): 1490−1499 doi: 10.1109/TMM.2008.2007334
    [22]
    Adi Y, Baum C, Cisse M, et al. Turning your weakness into a strength: Watermarking deep neural networks by backdooring[C]//Proc of the 27th USENIX Security Symp (USENIX Security 18). Berkeley, CA: USENIX Association, 2018: 1615−1631
    [23]
    Chen Huili, Rouhani B D, Koushanfar F. BlackMarks: Blackbox multibit watermarking for deep neural networks[J]. arXiv preprint, arXiv: 1904.00344, 2019
    [24]
    Wang Tianhao, Kerschbaum F. Attacks on digital watermarks for deep neural networks[C]//Proc of 2019 IEEE Int Conf on Acoustics, Speech and Signal Processing (ICASSP 2019). Piscataway, NJ: IEEE, 2019: 2622−2626
    [25]
    Namba R, Sakuma J. Robust watermarking of neural network with exponential weighting[C]//Proc of the 2019 ACM Asia Conf on Computer and Communications Security. New York: ACM, 2019: 228−240
    [26]
    Wang Bolun, Yao Yuanshun, Shan S, et al. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks[C]//Proc of 2019 IEEE Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2019: 707−723
    [27]
    王尚,李昕,宋永立,等. 基于自定义后门的触发器样本检测方案[J]. 信息安全学报,2022,7(6):48−61

    Wang Shang, Li Xin, Song Yongli, et al. A trigger sample detection scheme based on custom backdoor behaviors[J]. Journal of Cyber Security, 2022, 7(6): 48−61 (in Chinese)
    [28]
    Fan Lixin, Ng K W, Chan C S. Rethinking deep neural network ownership verification: Embedding passports to defeat ambiguity attacks[C]//Proc of the 33rd Int Conf on Neural Information Processing Systems. New York: Curran Associates, Inc, 2019: 4714−4723
    [29]
    Chen Huili, Rouhani B D, Fu Cheng, et al. Deepmarks: A secure fingerprinting framework for digital rights management of deep learning models[C]//Proc of the 2019 on Int Conf on Multimedia Retrieval. New York: ACM, 2019: 105−113
    [30]
    Tajbakhsh N, Shin J Y, Gurudu S R, et al. Convolutional neural networks for medical image analysis: Full training or fine tuning?[J]. IEEE Transactions on Medical Imaging, 2016, 35(5): 1299−1312 doi: 10.1109/TMI.2016.2535302
    [31]
    Darvish Rouhani B, Chen Huili, Koushanfar F. Deepsigns: An end-to-end watermarking framework for ownership protection of deep neural networks[C]//Proc of the 24th Int Conf on Architectural Support for Programming Languages and Operating Systems. New York: ACM, 2019: 485−497
    [32]
    Wang Tianhao, Kerschbaum F. RIGA: Covert and robust white-box watermarking of deep neural networks[C]//Proc of the Web Conf 2021. New York: ACM, 2021: 993−1004
    [33]
    Feng Le, Zhang Xinpeng. Watermarking neural network with compensation mechanism[C]//Proc of the 13th Int Conf on Knowledge Science, Engineering and Management (KSEM 2020). Berlin: Springer, 2020: 363−375
    [34]
    Kuribayashi M, Tanaka T, Suzuki S, et al. White-box watermarking scheme for fully-connected layers in fine-tuning model[C]//Proc of the 2021 ACM Workshop on Information Hiding and Multimedia Security. New York: ACM, 2021: 165−170
    [35]
    Chen B, Wornell G W. Quantization index modulation: A class of provably good methods for digital watermarking and information embedding[J]. IEEE Transactions on Information Theory, 2001, 47(4): 1423−1443 doi: 10.1109/18.923725
    [36]
    Kuribayashi M, Yasui T, Malik A. White box watermarking for convolution layers in fine-tuning model using the constant weight code[J]. Journal of Imaging, 2023, 9(6): 117 doi: 10.3390/jimaging9060117
    [37]
    Lv Peizhou, Li Pan, Zhang Shengzhi, et al. A robustness-assured white-box watermark in neural networks[J]. IEEE Transactions on Dependable and Secure Computing, 2023, 20(6): 5214−5229 doi: 10.1109/TDSC.2023.3242737
    [38]
    Yan Yifan, Pan Xudong, Zhang Mi, et al. Rethinking white-box watermarks on deep learning models under neural structural obfuscation[C]//Proc of the 32th USENIX Security Symp (USENIX Security 23). Berkeley, CA: USENIX Association, 2023: 2347−2364
    [39]
    Liu Yingqi, Ma Shiqing, Aafer Y, et al. Trojaning attack on neural networks[C]//Proc of the 25th Annual Network And Distributed System Security Symp (NDSS 2018). Berkeley, CA: Internet Soc, 2018: 1−15
    [40]
    Zhang Jialong, Gu Zhongshu, Jang Jiyong, et al. Protecting intellectual property of deep neural networks with watermarking[C]//Proc of the 2018 on Asia Conf on Computer and Communications Security. New York: ACM, 2018: 159−172
    [41]
    Fredrikson M, Jha S, Ristenpart T. Model inversion attacks that exploit confidence information and basic countermeasures[C]//Proc of the 22nd ACM SIGSAC Conf on Computer and Communications Security. New York: ACM, 2015: 1322−1333
    [42]
    Li Zheng, Hu Chengyu, Zhang Yang, et al. How to prove your model belongs to you: A blind-watermark based framework to protect intellectual property of DNN[C]//Proc of the 35th Annual Computer Security Applications Conf. New York: ACM, 2019: 126−137
    [43]
    Liu Yong, Wu Hanzhou, Zhang Xinpeng. Robust and imperceptible black-box DNN watermarking based on Fourier perturbation analysis and frequency sensitivity clustering[J]. IEEE Transactions on Dependable and Secure Computing, 2024: 1−14
    [44]
    Lounici S, Njeh M, Ermis O, et al. Yes we can: Watermarking machine learning models beyond classification[C]//Proc of 2021 IEEE 34th Computer Security Foundations Symp (CSF). Piscataway, NJ: IEEE, 2021: 1−14
    [45]
    Li Li, Zhang Weiming, Barni M. Universal BlackMarks: Key-image-free blackbox multi-bit watermarking of deep neural networks[J]. IEEE Signal Processing Letters, 2023, 30: 36−40 doi: 10.1109/LSP.2023.3239737
    [46]
    Jia Hengrui, Choquette-Choo C A, Chandrasekaran V, et al. Entangled watermarks as a defense against model extraction[C]//Proc of the 30th USENIX Security Symp (USENIX Security 21). Berkeley, CA: USENIX Association, 2021: 1937−1954
    [47]
    Bansal A, Chiang P, Curry M J, et al. Certified neural network watermarks with randomized smoothing[C]//Proc of Int Conf on Machine Learning. New York: PMLR, 2022: 1450−1465
    [48]
    Hua Guang, Teoh A B J, Xiang Yong, et al. Unambiguous and high-fidelity backdoor watermarking for deep neural networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(8): 11204−11217
    [49]
    刘伟发,张光华,杨婷,等. 基于标志网络的深度学习多模型水印方案[J]. 信息安全学报,2022,7(6):105−115

    Liu Weifa, Zhang Guanghua, Yang Ting, et al. Logo network based deep learning multi-model watermarking scheme[J]. Journal of Cyber Security, 2022, 7(6): 105−115 (in Chinese)
    [50]
    Li Wei, Zhang Xiaoyu, Lin Shen, et al. Chameleon DNN watermarking: Dynamically public model ownership verification[C]//Proc of the Int Conf on Information Security Applications. Berlin: Springer, 2022: 344−356
    [51]
    Chen Xiaofeng, Zhang Fangguo, Kim K. Chameleon hashing without key exposure[C]//Proc of the Int Conf on Information Security. Berlin: Springer, 2004: 87−98
    [52]
    Li Yiming, Bai Yang, Jiang Yong, et al. Untargeted backdoor watermark: Towards harmless and stealthy dataset copyright protection[J]. Advances in Neural Information Processing Systems, 2022, 35: 13238−13250
    [53]
    Lee S, Song W, Jana S, et al. Evaluating the robustness of trigger set-based watermarks embedded in deep neural networks[J]. IEEE Transactions on Dependable and Secure Computing, 2023, 20(4): 3434−3448 doi: 10.1109/TDSC.2022.3196790
    [54]
    Cao Xiaoyu, Jia Jinyuan, Gong N Z. IPGuard: Protecting intellectual property of deep neural networks via fingerprinting the classification boundary[C]//Proc of the 2021 ACM Asia Conf on Computer and Communications Security. New York: ACM, 2021: 14−25
    [55]
    Zhao Jingjing, Hu Qingyue, Liu Gaoyang, et al. AFA: Adversarial fingerprinting authentication for deep neural networks[J]. Computer Communications, 2020, 150: 488−497 doi: 10.1016/j.comcom.2019.12.016
    [56]
    Srivastava N, Hinton G, Krizhevsky A, et al. Dropout: A simple way to prevent neural networks from overfitting[J]. The Journal of Machine Learning Research, 2014, 15(1): 1929−1958
    [57]
    Wang Si, Chang C H. Fingerprinting deep neural networks—a deepfool approach[C]//Proc of 2021 IEEE Int Symp on Circuits and Systems (ISCAS). Piscataway, NJ: IEEE, 2021: 1−5
    [58]
    Lukas N, Zhang Yuxuan, Kerschbaum F. Deep neural network fingerprinting by conferrable adversarial examples[J]. arXiv preprint, arXiv: 1912.00888, 2019
    [59]
    Wang Siyue, Wang Xiao, Chen Pinyu, et al. Characteristic examples: High-robustness, low-transferability fingerprinting of neural networks[C]//Proc of the 30th Int Joint Conf on Artificial Intelligence. Montreal, Canada: IJCAI, 2021: 575−582
    [60]
    Peng Zirui, Li Shaofeng, Chen Guoxing, et al. Fingerprinting deep neural networks globally via universal adversarial perturbations[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2022: 13430−13439
    [61]
    Chen Jialuo, Wang Jingyi, Peng Tinglan, et al. Copy, right? A testing framework for copyright protection of deep learning models[C]//Proc of 2022 IEEE Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2022: 824−841
    [62]
    Maini P, Yaghini M, Papernot N. Dataset inference: Ownership resolution in machine learning[J]. arXiv preprint, arXiv: 2104.10706, 2021
    [63]
    Liu Yunpeng, Li Kexin, Liu Zhuotao, et al. Provenance of training without training data: Towards privacy-preserving DNN model ownership verification[C]//Proc of the ACM Web Conf 2023. New York: ACM, 2023: 1980−1990
    [64]
    Dong Tian, Li Shaofeng, Chen Guoxing, et al. RAI2: Responsible identity audit governing the artificial intelligence[C]//Proc of the 30th Annual Network and Distributed System Security Symp (NDSS 2023). San Diego, California, USA: Internet Soc, 2023: 1−18
    [65]
    Lv Peizhuo, Ma Hualong, Chen Kai, et al. MEA-Defender: A robust watermark against model extraction attack[C]//Proc of 2024 IEEE Symp on Security and Privacy. Piscataway, NJ: IEEE, 2024: 102−102
    [66]
    Fan Lixin, Ng K W, Chan C S. DeepIPR: Deep neural network intellectual property protection with passports[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2021, 44(10): 6122−6139
    [67]
    Zhang Jie, Chen Dongdong, Liao Jing, et al. Passport-aware normalization for deep model protection[J]. Advances in Neural Information Processing Systems, 2020, 33: 22619−22628
    [68]
    Liu Hanwen, Weng Zhenyu, Zhu Yuesheng, et al. Trapdoor normalization with irreversible ownership verification[C]//Proc of the Int Conf on Machine Learning. Honolulu, Hawaii, USA: PMLR, 2023: 22177−22187
    [69]
    Chen Yiming, Tian Jinyu, Chen Xiangyu, et al. Effective ambiguity attack against passport-based DNN intellectual property protection schemes through fully connected layer substitution[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2023: 8123−8132
    [70]
    Gomez L, Wilhelm M, Márquez J, et al. Security for distributed deep neural networks towards data confidentiality & intellectual property protection[J]. arXiv preprint, arXiv: 1907.04246, 2019
    [71]
    Brakerski Z, Gentry C, Vaikuntanathan V. (Leveled) fully homomorphic encryption without bootstrapping[J]. ACM Transactions on Computation Theory, 2014, 6(3): 1−36
    [72]
    Lin Ning, Chen Xiaoming, Lu Hang, et al. Chaotic weights: A novel approach to protect intellectual property of deep neural networks[J]. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2020, 40(7): 1327−1339
    [73]
    Peterson G. Arnold’s cat map[J]. Math Linear Algebra, 1997, 45: 1−7
    [74]
    Xue Mingfu, Wu Zhiyu, Zhang Yushu, et al. AdvParams: An active DNN intellectual property protection technique via adversarial perturbation based parameter encryption[J]. IEEE Transactions on Emerging Topics in Computing, 2023, 11(3): 664−678 doi: 10.1109/TETC.2022.3231012
    [75]
    Zhou Tong, Luo Yukui, Ren Shaolei, et al. NNSplitter: An active defense solution to DNN model via automated weight obfuscation[C]//Proc of Int Conf on Machine Learning. Honolulu, Hawaii, USA: PMLR, 2023: 42614−42624
    [76]
    Pyone A, Maung M, Kiya H. Training DNN model with secret key for model protection[C]//Proc of 2020 IEEE the 9th Global Conf on Consumer Electronics (GCCE). Piscataway, NJ: IEEE, 2020: 818−821
    [77]
    He Zecheng, Zhang Tianwei, Lee R. Sensitive-sample fingerprinting of deep neural networks[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2019: 4729−4737
    [78]
    Xu Guowen, Li Hongwei, Ren Hao, et al. Secure and verifiable inference in deep neural networks[C]//Proc of Annual Computer Security Applications Conf. New York: ACM, 2020: 784−797
    [79]
    Aramoon O, Chen Pinyu, Qu Gang. AID: Attesting the integrity of deep neural networks[C]//Proc of the 58th ACM/IEEE Design Automation Conf. Piscataway, NJ: IEEE, 2021: 19−24
    [80]
    Kuttichira D P, Gupta S, Nguyen D, et al. Verification of integrity of deployed deep learning models using Bayesian optimization[J]. Knowledge-Based Systems, 2022, 241: 108238 doi: 10.1016/j.knosys.2022.108238
    [81]
    Zhu Renjie, Wei Ping, Li Sheng, et al. Fragile neural network watermarking with trigger image set[C]//Proc of the 14th Int Conf on Knowledge Science, Engineering and Management. Berlin: Springer, 2021: 280−293
    [82]
    Lao Yingjie, Zhao Weijie, Yang Peng, et al. Deepauth: A DNN authentication framework by model-unique and fragile signature embedding[C]//Proc of the AAAI Conf on Artificial Intelligence, Palo Alto, CA: AAAI, 2022: 9595−9603
    [83]
    Madry A, Makelov A, Schmidt L, et al. Towards deep learning models resistant to adversarial attacks[J]. arXiv preprint, arXiv: 1706.06083, 2017
    [84]
    Yin Zhaoxia, Yin Heng, Su Hang, et al. Decision-based iterative fragile watermarking for model integrity verification[J]. arXiv preprint, arXiv: 2305.09684, 2023
    [85]
    林翔,金彪,尤玮婧,等. 基于脆弱指纹的深度神经网络模型完整性验证框架[J/OL]. 计算机应用. [2024-05-03]. http://kns.cnki.net/kcms/detail/51.1307.TP.20240118.1646.002.html

    Lin Xiang, Jin Biao, You Weijing, et al. Model integrity verification framework of deep neural network based on fragile fingerprint[J/OL]. Journal of Computer Applications . [2024-05-03]. http://kns.cnki.net/kcms/detail/51.1307.TP.20240118.1646.002.html (in Chinese)
    [86]
    钱亚冠,何念念,郭艳凯,等. 针对深度神经网络模型指纹检测的逃避算法[J]. 计算机研究与发展,2021,58(5):1106−1117 doi: 10.7544/issn1000-1239.2021.20200903

    Qian Yaguan, He Niannian, Guo Yankai, et al. An evasion algorithm to fool fingerprint detector for deep neural networks[J]. Journal of Computer Research and Development, 2021, 58(5): 1106−1117 (in Chinese) doi: 10.7544/issn1000-1239.2021.20200903
    [87]
    Wang Shuo, Abuadbba S, Agarwal S, et al. PublicCheck: Public integrity verification for services of run-time deep models[C]//Proc of 2023 IEEE Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2023: 1348−1365
    [88]
    Abuadbba A, Rhodes N, Moore K, et al. DeepiSign-G: Generic watermark to stamp hidden DNN parameters for self-contained tracking[J]. arXiv preprint, arXiv: 2407.01260, 2024.
    [89]
    Guan Xiquan, Feng Huamin, Zhang Weiming, et al. Reversible watermarking in deep convolutional neural networks for integrity authentication[C]//Proc of the 28th ACM Int Conf on Multimedia. New York: ACM, 2020: 2273−2280
    [90]
    Zhao Gejian, Qin Chuan, Yao Heng, et al. DNN self-embedding watermarking: Towards tampering detection and parameter recovery for deep neural network[J]. Pattern Recognition Letters, 2022, 164: 16−22 doi: 10.1016/j.patrec.2022.10.013
    [91]
    Huang Yawen, Zheng Hongying, Xiao Di. Convolutional neural networks tamper detection and location based on fragile watermarking[J]. Applied Intelligence, 2023, 53: 24056−24067 doi: 10.1007/s10489-023-04797-w
    [92]
    Lin Mingbao, Ji Rongrong, Wang Yan, et al. Hrank: Filter pruning using high-rank feature map[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2020: 1529−1538
    [93]
    Lawnik M. Combined logistic and tent map[J]. Journal of Physics: Conference Series, 2018, 1141: 012132
    [94]
    Quan Yuhui, Teng Huan, Chen Yixin, et al. Watermarking deep neural networks in image processing[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020, 32(5): 1852−1865
    [95]
    Hou Minghua, Tang Linlin, Qi Shuhan, et al. A robust watermarking method for image processing models[C]//Proc of 2022 4th Int Conf on Data Intelligence and Security (ICDIS). Piscataway, NJ: IEEE, 2022: 75−81
    [96]
    Zhang Jie, Chen Dongdong, Liao Jing, et al. Model watermarking for image processing networks[C]//Proc of the AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2020, 34(7): 12805−12812
    [97]
    Lim J H, Chan C S, Ng K W, et al. Protect, show, attend and tell: Empowering image captioning models with ownership protection[J]. Pattern Recognition, 2022, 122: 108285 doi: 10.1016/j.patcog.2021.108285
    [98]
    Ong D S, Chan C S, Ng K W, et al. Protecting intellectual property of generative adversarial networks from ambiguity attacks[C]//Proc of the IEEE/CVF Conf on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021: 3630−3639
    [99]
    Qiao Tong, Ma Yuyan, Zheng Ning, et al. A novel model watermarking for protecting generative adversarial network[J]. Computers & Security, 2023, 127: 103102
    [100]
    He Xuanli, Xu Qiongkai, Lyu Lingjuan, et al. Protecting intellectual property of language generation APIs with lexical watermark[C]//Proc of the AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2022: 10758−10766
    [101]
    代龙,张静,樊雪峰,等. 基于黑盒水印的NLP神经网络版权保护[J]. 网络与信息安全学报,2023,9(1):140−149 doi: 10.11959/j.issn.2096-109x.2023009

    Dai Long, Zhang Jing, Fan Xuefeng, et al. NLP neural network copyright protection based on black box watermark[J]. Chinese Journal of Network and Information Security, 2023, 9(1): 140−149 (in Chinese) doi: 10.11959/j.issn.2096-109x.2023009
    [102]
    Kirchenbauer J, Geiping J, Wen Yuxin, et al. A watermark for large language models[C]//Proc of Int Conf on Machine Learning. Honolulu, Hawaii, USA: PMLR, 2023: 17061−17084
    [103]
    Chiang W L, Li Zhuohan, Lin Zi, et al. Vicuna: An open-source chatbot impressing GPT−4 with 90%* ChatGPT quality[OL]. [2023-10-25]. https://lmsys.org/blog/2023-03-30-vicuna/
    [104]
    Peng Wenjun, Yi Jingwei, Wu Fangzhao, et al. Are you copying my model? Protecting the copyright of large language models for EaaS via backdoor watermark[J]. arXiv preprint, arXiv: 2305.10036, 2023
    [105]
    Guo Jia, Potkonjak M. Watermarking deep neural networks for embedded systems[C]//Proc of 2018 IEEE/ACM Int Conf on Computer-Aided Design (ICCAD). Piscataway, NJ: IEEE, 2018: 1−8
    [106]
    Clements J, Lao Yingjie. DeepHardMark: Towards watermarking neural network hardware[C]//Proc of the AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2022: 4450−4458
    [107]
    Bulens P, Standaert F X, Quisquater J J. How to stronglylink data and its medium: The paper case[J]. IET Information Security, 2010, 4(3): 125−136 doi: 10.1049/iet-ifs.2009.0032
    [108]
    Mankali L, Rangarajan N, Chatterjee S, et al. Leveraging ferroelectric stochasticity and in-memory computing for DNN IP obfuscation[J]. IEEE Journal on Exploratory Solid-State Computational Devices and Circuits, 2022, 8(2): 102−110 doi: 10.1109/JXCDC.2022.3217043
    [109]
    Chen Haozhe, Zhang Weiming, Liu Kunlin, et al. Speech pattern based black-box model watermarking for automatic speech recognition[C]//Proc of 2022 IEEE Int Conf on Acoustics, Speech and Signal Processing (ICASSP 2022). Piscataway, NJ: IEEE, 2022: 3059−3063
    [110]
    Chen Haozhe, Zhang Jie, Chen Kejiang, et al. Model access control based on hidden adversarial examples for automatic speech recognition[J]. IEEE Transactions on Artificial Intelligence, 2024, 5(3): 1302−1315 doi: 10.1109/TAI.2023.3285858
    [111]
    Guo Junfeng, Li Yiming, Wang Lixu, et al. Domain watermark: Effective and harmless dataset copyright protection is closed at hand[J]. Advances in Neural Information Processing Systems, 2024, 36: 54421−54450
    [112]
    Liu Tengjun, Chen Ying, Gu Wanxuan. Copyright-certified distillation dataset: Distilling one million coins into one bitcoin with your private key[C]//Proc of the AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2023, 37(5): 6458−6466
    [113]
    Chen Kangjie, Guo Shangwei, Zhang Tianwei, et al. Stealing deep reinforcement learning models for fun and profit[C]//Proc of the 2021 ACM Asia Conf on Computer and Communications Security. New York: ACM, 2021: 307−319
    [114]
    Behzadan V, Hsu W. Sequential triggers for watermarking of deep reinforcement learning policies[J]. arXiv preprint, arXiv: 1906.01126, 2019
    [115]
    Chen Kangjie, Guo Shangwei, Zhang Tianwei, et al. Temporal watermarks for deep reinforcement learning models[C]//Proc of the 20th Int Conf on Autonomous Agents and Multiagent Systems. Virtual Event, United Kingdom, 2021: 314−322
    [116]
    陈瑜霖,姚志强,金彪,等. 一种基于后门技术的深度强化学习水印框架[J]. 福建师范大学学报:自然科学版,2024,40(1):96−105

    Chen Yulin, Yao Zhiqiang, Jin Biao, et al. A deep reinforcement learning watermarking framework based on backdoor technology[J]. Journal of Fujian Normal University (Natural Science Edition), 2024, 40(1): 96−105 (in Chinese)
    [117]
    Konečný J, McMahan H B, Yu F X, et al. Federated learning: Strategies for improving communication efficiency[J]. arXiv preprint, arXiv: 1610.05492, 2016
    [118]
    Li Bowen, Fan Lixin, Gu Hanlin, et al. FedIPR: Ownership verification for federated deep neural network models[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 45(4): 4521−4536
    [119]
    李璇,邓天鹏,熊金波,等. 基于模型后门的联邦学习水印[J]. 软件学报,2024,35(7):3454−3468

    Li Xuan, Deng Tianpeng, Xiong Jinbo, et al. Federated learning watermark based on backdoor[J]. Journal of Software, 2024, 35(7): 3454−3468 (in Chinese)
    [120]
    郭晶晶,刘玖樽,马勇,等. 基于模型水印的联邦学习后门攻击防御方法[J]. 计算机学报,2024,47(3):662−676 doi: 10.11897/SP.J.1016.2024.00662

    Guo Jingjing, Liu Jiuzun, Ma Yong, et al. Backdoor attack defense method for federated learning based on model watermarking[J]. Chinese Journal of Computers, 2024, 47(3): 662−676 (in Chinese) doi: 10.11897/SP.J.1016.2024.00662
    [121]
    Zhou Zan, Xu Changqiao, Wang Mingze, et al. A multi-shuffler framework to establish mutual confidence for secure federated learning[J]. IEEE Transactions on Dependable and Secure Computing, 2023, 20(5): 4230−4244 doi: 10.1109/TDSC.2022.3215574
    [122]
    Shen Yun, He Xinle, Han Yufei, et al. Model stealing attacks against inductive graph neural networks[C]//Proc of 2022 IEEE Symp on Security and Privacy (SP). Piscataway, NJ: IEEE, 2022: 1175−1192
    [123]
    Dziedzic A, Boenisch F, Jiang Mingjian, et al. Sentence embedding encoders are easy to steal but hard to defend[C]//Proc of ICLR 2023 Workshop on Pitfalls of Limited Data and Computation for Trustworthy ML. Kigali, Rwanda: ICLR, 2023: 1−12
    [124]
    Lin L, Wu Hanzhou. Verifying integrity of deep ensemble models by lossless black-box watermarking with sensitive samples[C]//Proc of the 10th Int Symp on Digital Forensics and Security (ISDFS). Piscataway, NJ: IEEE, 2022: 1−6
    [125]
    冯帅,邓伦治. 基于身份的隐私保护数据审计方案[J]. 贵州师范大学学报:自然科学版,2023,41(2):105−112

    Feng Shuai, Deng Lunzhi. Identity-based data auditing scheme with privacy protection[J]. Journal of Guizhou Normal University (Natural Sciences), 2023, 41(2): 105−112 (in Chinese)
    [126]
    熊虎,林烨,姚婷. 支持等式测试及密码逆向防火墙的SM9标识加密方案[J]. 计算机研究与发展,2024,61(4):1070−1084 doi: 10.7544/issn1000-1239.202220809

    Xiong Hu, Lin Ye, Yao Ting. SM9 identity-based encryption scheme with equality test and cryptographic reverse firewalls[J]. Journal of Computer Research and Development, 2024, 61(4): 1070−1084 (in Chinese) doi: 10.7544/issn1000-1239.202220809
