
    A Temporal Membership Inference Attack Method on Diffusion Models

    • Abstract: Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process. In recent years they have been applied to a growing number of generative tasks in fields such as computer vision and natural language processing, and data-privacy attacks against diffusion models, together with corresponding defenses, have accordingly drawn wide attention. The membership inference attack (MIA) is a classic attack against machine learning models. However, existing MIA methods are limited in both depth and breadth; in particular, they struggle to balance the effectiveness of short-term and long-term attacks. This paper proposes a new temporal membership inference attack method for diffusion models, which uses noise gradient information to secure the attack success rate (ASR) of short-term attacks and exploits temporal noise information to improve the effectiveness of long-term attacks. Experiments show that the proposed method raises the short-term-attack ASR by about 5% and the long-term-attack ASR by about 1% on common diffusion models.
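To make the per-timestep membership signal concrete, the sketch below shows a simple, widely used loss-based baseline: score a candidate sample by its noise-prediction error at a single diffusion timestep, where training members tend to incur lower error. This is not the paper's noise-gradient method; the DDPM-style predictor eps_model(x_t, t), the linear noise schedule, and the ZeroEps stand-in are all illustrative assumptions.

```python
# Minimal sketch of a loss-based membership score on a DDPM-style model.
# It illustrates the general idea of per-timestep membership signals and
# is NOT the paper's noise-gradient method. All names are assumptions.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # assumed linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product, \bar{alpha}_t

def membership_score(eps_model: nn.Module, x0: torch.Tensor, t: int) -> float:
    """Noise-prediction error of x0 at a single timestep t.

    Training members tend to be denoised more accurately,
    so a lower score suggests membership."""
    a_bar = alphas_bar[t]
    noise = torch.randn_like(x0)
    # Forward diffusion: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * noise
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    t_batch = torch.full((x0.shape[0],), t, dtype=torch.long)
    pred = eps_model(x_t, t_batch)               # model's noise estimate
    return nn.functional.mse_loss(pred, noise).item()

class ZeroEps(nn.Module):
    """Stand-in predictor so the sketch runs end to end;
    a real attack would query the target diffusion model."""
    def forward(self, x_t, t):
        return torch.zeros_like(x_t)

x = torch.randn(4, 3, 32, 32)                   # hypothetical candidate batch
print(membership_score(ZeroEps(), x, t=50))     # lower => more likely a member
```

Querying a single small timestep keeps the query budget low, which is roughly the regime a short-term attack operates in.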

       

      Abstract: Diffusion models have gained significant attention in recent years due to their potential in various generation tasks, including image and text generation. However, the widespread use of these models has also raised concerns regarding data privacy, particularly their vulnerability to membership inference attacks (MIAs). These attacks aim to determine whether a specific data point was part of the model's training set, posing significant privacy risks. This paper reviews recent developments in privacy protection for diffusion models, with a specific focus on MIAs and their challenges. Existing MIA methods often struggle to balance the effectiveness of short-term and long-term attacks, and their applicability to diffusion models has not been thoroughly explored. To address these issues, we propose a novel temporal membership inference attack method designed to enhance the attack success rate (ASR) of both short-term and long-term attacks. The proposed method leverages gradient information from the noise during short-term attacks and temporal noise patterns to bolster the effectiveness of long-term attacks. Experimental results demonstrate that, compared with conventional approaches on common diffusion models, our method improves the ASR by approximately 5% for short-term attacks and 1% for long-term attacks. This work contributes to ongoing efforts to understand and mitigate privacy risks in diffusion model applications.
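To illustrate how information spread across the diffusion trajectory can strengthen a long-term attack, the sketch below continues the earlier Python sketch (reusing membership_score and ZeroEps) and simply averages the per-timestep errors over a grid of timesteps, then thresholds the result. The timestep grid, the plain mean, and the threshold are illustrative assumptions, not the paper's temporal-noise statistic.

```python
# Continuation of the earlier sketch: aggregate per-timestep errors into a
# single temporal statistic. The mean and the threshold are assumptions;
# the paper's actual temporal-noise statistic may differ.
import statistics

def temporal_score(eps_model, x0, timesteps) -> float:
    """Mean noise-prediction error over a sequence of timesteps."""
    return statistics.mean(membership_score(eps_model, x0, t) for t in timesteps)

def infer_membership(eps_model, x0, timesteps, threshold: float) -> bool:
    """Flag x0 as a training member when the aggregated error falls below a
    threshold calibrated on samples known to be outside the training set."""
    return temporal_score(eps_model, x0, timesteps) < threshold

# Sweep a coarse grid of timesteps spanning the whole trajectory.
is_member = infer_membership(ZeroEps(), torch.randn(4, 3, 32, 32),
                             timesteps=range(10, 1000, 100), threshold=1.0)
print(is_member)
```

Averaging over many timesteps smooths out the randomness of any single noise draw, which is one plausible reason temporal aggregation helps in the long-term setting.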

       
