Abstract:
Diffusion models have attracted considerable attention in recent years for their strong performance on generation tasks such as image and text synthesis. However, their widespread use has also raised data privacy concerns, particularly their vulnerability to membership inference attacks (MIAs). These attacks aim to determine whether a specific data point was part of a model's training set, and thus pose serious privacy risks. This paper provides an overview of recent developments in privacy protection for diffusion models, with a specific focus on MIAs and the challenges they present. Existing MIA methods often struggle to balance the effectiveness of short-term and long-term attacks, and their applicability to diffusion models has not been thoroughly explored. To address these issues, we propose a novel temporal membership inference attack designed to improve the attack success rate (ASR) of both short-term and long-term attacks. The proposed method leverages gradient information from the noise for short-term attacks and exploits temporal noise patterns to strengthen long-term attacks. Experimental results show that, compared with conventional approaches on common diffusion models, our method improves the ASR by approximately 5% for short-term attacks and 1% for long-term attacks. This work contributes to ongoing efforts to understand and mitigate privacy risks in diffusion model applications.