    Liu Biao, Zhang Fangjiao, Wang Wenxin, Xie Kang, Zhang Jianyi. A Byzantine-Robust Federated Learning Algorithm Based on Matrix Mapping[J]. Journal of Computer Research and Development, 2021, 58(11): 2416-2429. DOI: 10.7544/issn1000-1239.2021.20210633


    A Byzantine-Robust Federated Learning Algorithm Based on Matrix Mapping


      Abstract: Federated learning offers better data-privacy protection because the parameter server only collects client models and never touches the clients' local data. However, its basic aggregation algorithm, FedAvg, is vulnerable to Byzantine client attacks. Many studies have proposed alternative aggregation algorithms to address this problem, but these algorithms suffer from insufficient defensive capability and from model assumptions that do not hold in practice. We therefore propose a new Byzantine-robust aggregation algorithm. Unlike existing aggregation algorithms, ours focuses on detecting the probability distribution of the Softmax layer. Specifically, after collecting the client models, the parameter server maps the update part of each model through a constructed matrix to obtain that model's Softmax-layer probability distribution, and excludes client models whose distributions are abnormal. Experimental results show that, without reducing the accuracy of FedAvg, our algorithm raises the Byzantine tolerance rate from 40% to 45% under convergence-hindering attacks, and among backdoor attacks it defends against edge-case backdoor attacks. In addition, following the current state-of-the-art adaptive attack framework, we design an adaptive attack aimed specifically at our aggregation algorithm and evaluate it experimentally; the results show that our aggregation algorithm can defend against at least 30% Byzantine clients.
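      The abstract only outlines the detection step. The minimal Python/NumPy sketch below illustrates one way the described flow could look: map each client's updated model through a server-constructed matrix to a Softmax-layer probability distribution, drop clients whose distributions look abnormal, then average the rest as in FedAvg. The names robust_aggregate and probe_matrix, the median-based anomaly score, and the threshold tol are illustrative assumptions, not the construction given in the paper.

      import numpy as np

      def softmax(z):
          # Numerically stable softmax over a 1-D logit vector.
          z = z - z.max()
          e = np.exp(z)
          return e / e.sum()

      def robust_aggregate(global_weights, client_updates, probe_matrix, tol=3.0):
          # global_weights: flattened global model, shape (d,)
          # client_updates: list of flattened client updates, each shape (d,)
          # probe_matrix:   server-constructed matrix, shape (n_classes, d); how the
          #                 paper actually builds it is not stated in the abstract,
          #                 so this mapping is an assumption
          # tol:            assumed anomaly threshold in median-absolute-deviation units

          # 1. Map each client's updated model to a Softmax-layer probability distribution.
          dists = np.stack([softmax(probe_matrix @ (global_weights + delta))
                            for delta in client_updates])

          # 2. Score clients by distance to the median distribution and drop outliers
          #    (the clients with an "abnormal distribution").
          median_dist = np.median(dists, axis=0)
          scores = np.linalg.norm(dists - median_dist, axis=1)
          mad = np.median(np.abs(scores - np.median(scores))) + 1e-12
          keep = scores <= np.median(scores) + tol * mad

          # 3. FedAvg over the clients that pass the check.
          kept = np.stack([u for u, k in zip(client_updates, keep) if k])
          return global_weights + kept.mean(axis=0)

      The design idea this sketch tries to capture is that anomalies are detected in the space of output-class distributions rather than in raw parameter or gradient space, which is what distinguishes the described approach from distance-based aggregation rules.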

       
