Abstract:
Privacy auditing is a crucial aspect of data governance, aiming to verify whether data privacy is actually protected as claimed. Typically, personal data are protected by perturbation or noise addition so that the resulting mechanisms satisfy differential privacy guarantees. In machine learning in particular, a growing number of differentially private algorithms have emerged, each claiming a stringent level of privacy protection. Although rigorous mathematical proofs of privacy are provided before these algorithms are released, the privacy they deliver in practice is hardly assured. Because the theory of differential privacy is complex, the correctness of these proofs may not be thoroughly examined, and subtle errors may be introduced during implementation. Either issue can weaken the protection below the claimed level and leak additional private information. To tackle this problem, privacy auditing for differential privacy algorithms has emerged. This technique estimates the actual degree of privacy protection provided by differentially private algorithms, facilitating the discovery of mistakes and the improvement of existing algorithms. This paper surveys the scenarios and methods of privacy auditing, summarizing existing methods from three aspects (data construction, data measurement, and result quantification) and evaluating them through experiments. Finally, this work presents the open challenges of privacy auditing and its future directions.