ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development, 2021, Vol. 58, Issue (11): 2364-2373. doi: 10.7544/issn1000-1239.2021.20210659

Special Topic: 2021 Special Topic on Cryptography and Cyberspace Security Governance

• Information Security •




Federated Learning Backdoor Attack Scheme Based on Generative Adversarial Network

Chen Dawei1,2, Fu Anmin1,2, Zhou Chunyi1, Chen Zhenzhu1   

  1. School of Computer Science and Engineering, Nanjing University of Science & Technology, Nanjing 210094; 2. State Key Laboratory of Information Security (Institute of Information Engineering, Chinese Academy of Sciences), Beijing 100093
  • Online: 2021-11-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (62072239), the Open Foundation of the State Key Laboratory of Information Security of China (2021-MS-07), and the Fundamental Research Funds for the Central Universities (30920021129, 30921013111).


Abstract: Federated learning enables users to participate in collaborative model training while keeping their data local, which reduces the risk of user data leakage; it has been widely applied in fields such as smart finance and smart healthcare. However, federated learning is inherently vulnerable to backdoor attacks: an attacker implants a backdoor by uploading crafted model parameters, and once the global model receives an input containing the trigger, it misclassifies that input as the attacker-specified label. This paper proposes a new federated learning backdoor attack scheme, Bac_GAN. Using a generative adversarial network, triggers are implanted into clean samples in the form of watermarks, which reduces the discrepancy between trigger features and clean-sample features and improves the imperceptibility of the trigger. By scaling the backdoor model, the scheme prevents the backdoor's contribution from being cancelled out during parameter aggregation, so the backdoor model converges in a short time, significantly increasing the attack success rate. In addition, we experimentally evaluate the core elements of the backdoor attack, such as trigger generation, the watermark coefficient, and the scaling coefficient, and report the parameter settings that yield the best attack performance. We validate the attack effectiveness of the Bac_GAN scheme on the MNIST and CIFAR-10 datasets.
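The two mechanisms the abstract describes — blending a trigger into clean samples as a faint watermark, and scaling the backdoored update so averaging does not cancel it — can be sketched as below. This is a minimal illustration under assumed formulations: the names `blend_trigger` and `scale_update`, the linear blend with watermark coefficient `alpha`, the fixed patch trigger, and the choice of `gamma` are all illustrative assumptions; in Bac_GAN the trigger itself is learned by a GAN rather than fixed.

```python
import numpy as np

def blend_trigger(x_clean, trigger, alpha=0.1):
    """Blend a trigger into a clean sample as a faint watermark.
    A small alpha (the watermark coefficient) keeps the poisoned
    sample visually close to the clean one, making the trigger
    harder to notice."""
    return (1.0 - alpha) * x_clean + alpha * trigger

def scale_update(w_backdoor, w_global, gamma=5.0):
    """Scale the attacker's deviation from the global model before
    uploading, so the backdoor survives averaging with benign
    updates (gamma is the scaling coefficient)."""
    return w_global + gamma * (w_backdoor - w_global)

# Toy demo with a 28x28 "image" and a 2-parameter "model".
rng = np.random.default_rng(0)
x = rng.random((28, 28))                  # clean sample in [0, 1)
t = np.zeros((28, 28))
t[:4, :4] = 1.0                           # simple corner-patch trigger (illustrative)
x_poisoned = blend_trigger(x, t, alpha=0.1)

w_g = np.array([0.5, 0.5])                # current global weights
w_b = np.array([0.7, 0.3])                # attacker's backdoored weights
w_up = scale_update(w_b, w_g, gamma=5.0)  # -> [1.5, -0.5]
```

When the server averages updates from roughly `n` clients, choosing the scaling coefficient on the order of `n` makes the attacker's scaled deviation approximately replace the aggregate deviation, which is why scaling keeps the backdoor from being diluted away.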

Key words: federated learning, generative adversarial network, backdoor attack, trigger, watermark