Abstract:
Constructing a high-performing deep neural network (DNN) model requires a large amount of training data, high-performance computing equipment, and extensive expertise. DNN models should therefore be regarded as the intellectual property (IP) of their owners. Protecting the IP of a DNN model also acknowledges the value of the data on which the model was developed and trained. However, DNN models are vulnerable to attacks such as theft, tampering, and illegal dissemination by malicious users, and the search for effective strategies to protect their IP has become both a pivotal area of academic research and an urgent challenge for industry. Unlike existing related reviews, we focus on the application scenarios of DNN model watermarking. We review methodologies for DNN model IP protection based on watermarking technology along two dimensions: robust model watermarking for model copyright declaration and fragile model watermarking for model integrity verification, and we discuss their characteristics, advantages, and limitations. We also elaborate on the practical application of DNN model watermarking technology. Finally, by summarizing the techniques common to these methods, we outline future research directions for DNN model IP protection.