Chen Dawei, Fu Anmin, Zhou Chunyi, Chen Zhenzhu. Federated Learning Backdoor Attack Scheme Based on Generative Adversarial Network[J]. Journal of Computer Research and Development, 2021, 58(11): 2364-2373. DOI: 10.7544/issn1000-1239.2021.20210659

Federated Learning Backdoor Attack Scheme Based on Generative Adversarial Network


Abstract: Federated learning enables users to participate in collaborative model training while keeping their data local, which protects the privacy and security of users' data. It has been widely used in smart finance, smart healthcare, and other fields. However, federated learning is inherently vulnerable to backdoor attacks, in which an attacker implants a backdoor by uploading crafted model parameters. Once the global model recognizes an input carrying the trigger, it misclassifies that input as the label specified by the attacker. This paper proposes a new federated learning backdoor attack scheme, Bac_GAN. By leveraging a generative adversarial network (GAN), triggers are implanted into clean samples in the form of watermarks, which reduces the discrepancy between trigger features and clean-sample features and enhances the imperceptibility of the triggers. By scaling the backdoor model, the scheme prevents the backdoor's contribution from being canceled out during parameter aggregation, so that the backdoor model converges in a short time, significantly increasing the attack success rate. In addition, we experimentally evaluate the core elements of the backdoor attack, such as trigger generation, the watermark coefficient, and the scaling coefficient, and identify the parameter settings that yield the best attack performance. We also validate the attack effectiveness of the Bac_GAN scheme on the MNIST and CIFAR-10 datasets.
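The two core mechanisms described in the abstract, watermark-style trigger blending and scaling of the backdoor update, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the linear blending rule, and the default values of the watermark coefficient `alpha` and scaling coefficient `gamma` are all assumptions made for exposition.

```python
# Hypothetical sketch of the two steps the abstract describes:
# (1) blending a trigger into a clean sample as a watermark,
# (2) scaling the attacker's update so federated averaging does not
#     dilute the backdoor's contribution.
import numpy as np

def blend_trigger(clean, trigger, alpha=0.1):
    """Blend a trigger into a clean sample as a low-visibility watermark.

    alpha is the (hypothetical) watermark coefficient: a small alpha keeps
    the poisoned sample close to the clean one in feature space, which is
    what makes the trigger hard to notice.
    """
    poisoned = (1.0 - alpha) * clean + alpha * trigger
    return np.clip(poisoned, 0.0, 1.0)  # keep pixel values in valid range

def scale_update(local_weights, global_weights, gamma=10.0):
    """Scale the attacker's model update before submission.

    gamma is the (hypothetical) scaling coefficient; in model-replacement
    style attacks it is typically chosen so the scaled update survives
    averaging with the benign clients' updates.
    """
    return {name: global_weights[name]
                  + gamma * (local_weights[name] - global_weights[name])
            for name in global_weights}
```

With `alpha = 0.25`, a poisoned pixel is a 3:1 mix of clean and trigger values, so the trigger appears only as a faint watermark; `gamma` then amplifies the weight delta that encodes the backdoor.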

       
