    Citation: Dong Ye, Hou Wei, Chen Xiaojun, Zeng Shuai. Efficient and Secure Federated Learning Based on Secret Sharing and Gradients Selection[J]. Journal of Computer Research and Development, 2020, 57(10): 2241-2250. DOI: 10.7544/issn1000-1239.2020.20200463

    Efficient and Secure Federated Learning Based on Secret Sharing and Gradients Selection


      Abstract: In recent years, federated learning (FL) has emerged as a collaborative machine learning method in which distributed users can train various models by sharing only gradients. To prevent privacy leakage from gradients, secure multi-party computation (MPC) has recently been considered a promising safeguard. Meanwhile, researchers have proposed Top-K gradient selection algorithms to reduce the traffic needed to synchronize gradients among distributed users. However, few existing works balance the advantages of these two areas. We combine secret sharing with Top-K gradient selection to design efficient and secure federated learning protocols that cut down communication overhead and improve training efficiency while guaranteeing user privacy and data security. We also propose an efficient method for constructing a message authentication code (MAC) to verify the validity of the aggregated results returned by the servers; the communication overhead introduced by the MAC is small and independent of the number of shared gradients. In addition, we implement a prototype system. Compared with plaintext training under the same conditions, our secure techniques introduce only a small additional overhead in communication and computation, while achieving the same level of model accuracy as plaintext training.
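
    To make the combination of Top-K selection and secret sharing concrete, the following Python sketch illustrates the general idea under stated assumptions: a user keeps only its k largest-magnitude gradient entries, encodes them in fixed point, and splits them into additive shares for two non-colluding aggregation servers. The function names, the field modulus, and the two-server setting are illustrative assumptions, not the paper's actual protocol; in particular, how the servers align the selected indices across users is part of the paper's design and is not shown here.

    import numpy as np

    PRIME = 2**61 - 1   # illustrative prime modulus for additive sharing (assumption)
    SCALE = 10**6       # fixed-point scaling factor for real-valued gradients

    def top_k_sparsify(grad, k):
        # Keep the k largest-magnitude entries; return their indices and values.
        idx = np.argsort(np.abs(grad))[-k:]
        return idx, grad[idx]

    def share_additively(values):
        # Encode floats in fixed point and split them into two additive shares mod PRIME.
        encoded = np.round(values * SCALE).astype(np.int64) % PRIME
        share0 = np.random.randint(0, PRIME, size=encoded.shape, dtype=np.int64)
        share1 = (encoded - share0) % PRIME
        return share0, share1

    def reconstruct(share0, share1):
        # Recombine the shares and decode back to signed floats.
        total = (share0 + share1) % PRIME
        signed = np.where(total > PRIME // 2, total - PRIME, total)
        return signed.astype(np.float64) / SCALE

    grad = np.random.randn(10)                 # one user's local gradient vector
    idx, vals = top_k_sparsify(grad, k=3)      # only 3 of 10 entries are transmitted
    s0, s1 = share_additively(vals)            # s0 goes to server 0, s1 to server 1
    assert np.allclose(reconstruct(s0, s1), vals, atol=1e-6)

    Because each user transmits only k (index, share) pairs instead of the full gradient vector, the per-round traffic shrinks with k, while each server alone sees only uniformly random shares.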
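
    The abstract states that the verification overhead is independent of the number of gradients. One standard way to obtain a constant-size check, sketched below purely as an illustrative assumption rather than the paper's concrete MAC construction, is a linear tag: each user tags its encoded gradient vector with an inner product against powers of a secret random point unknown to the servers, and because the tag is linear, the tag of the aggregate equals the sum of the users' tags, a single field element regardless of the gradient dimension.

    import numpy as np

    PRIME = 2**61 - 1   # same illustrative modulus as above (assumption)

    def linear_tag(vec, r):
        # Tag = sum_i vec[i] * r^(i+1) mod PRIME; one field element per vector.
        return sum(int(v) * pow(r, i + 1, PRIME) for i, v in enumerate(vec)) % PRIME

    r = 123456789                               # secret point agreed on by the users
    u1 = np.array([3, 1, 4], dtype=np.int64)    # user 1's encoded gradients
    u2 = np.array([2, 7, 1], dtype=np.int64)    # user 2's encoded gradients
    t1, t2 = linear_tag(u1, r), linear_tag(u2, r)

    agg = (u1 + u2) % PRIME                     # aggregate claimed by the servers
    tag = (t1 + t2) % PRIME                     # aggregated tag: size independent of dimension

    # A user accepts the aggregate only if the recomputed tag matches.
    assert linear_tag(agg, r) == tag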

       
