• China Top-Quality Science and Technology Journal
  • CCF Recommended Class-A Chinese Journal
  • T1-Class High-Quality Science and Technology Journal in Computing
Dong Ye, Hou Wei, Chen Xiaojun, Zeng Shuai. Efficient and Secure Federated Learning Based on Secret Sharing and Gradients Selection[J]. Journal of Computer Research and Development, 2020, 57(10): 2241-2250. DOI: 10.7544/issn1000-1239.2020.20200463

Efficient and Secure Federated Learning Based on Secret Sharing and Gradients Selection

  • Published Date: September 30, 2020
  • Abstract: In recent years, federated learning (FL) has emerged as a collaborative machine learning paradigm in which distributed users jointly train models by sharing only gradients. To prevent privacy leakage from those gradients, secure multi-party computation (MPC) has recently been regarded as a promising safeguard. Meanwhile, some researchers have proposed Top-K gradient selection algorithms to reduce the traffic required to synchronize gradients among distributed users. However, few existing works balance the advantages of these two areas. We combine secret sharing with Top-K gradient selection to design efficient and secure federated learning protocols, cutting communication overhead and improving efficiency during the training phase while guaranteeing users' privacy and data security. We also propose an efficient method of constructing a message authentication code (MAC) to verify the validity of the aggregated results returned by the servers; the communication overhead introduced by the MAC is small and independent of the number of shared gradients. In addition, we implement a prototype system. Compared with plaintext training, our secure techniques introduce only small additional communication and computation overheads while achieving the same level of accuracy.
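The pipeline the abstract describes — Top-K sparsification, additive secret sharing across non-colluding servers, and a constant-size linear MAC over the aggregate — can be sketched as a toy simulation. This is an illustrative sketch only, not the paper's exact protocol: the field modulus, fixed-point encoding, and the inner-product MAC key `r` are assumptions made for the example.

```python
import random

PRIME = 2**61 - 1   # field modulus for additive sharing (assumed; any large prime works)
SCALE = 10**6       # fixed-point scale for encoding float gradients

def encode(x):
    """Fixed-point encode a float gradient into Z_p (negatives wrap around)."""
    return int(round(x * SCALE)) % PRIME

def decode(v):
    """Invert encode(): residues above p/2 represent negative values."""
    return (v - PRIME if v > PRIME // 2 else v) / SCALE

def top_k_sparsify(grad, k):
    """Top-K selection: zero every entry except the k largest in magnitude."""
    keep = set(sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k])
    return [g if i in keep else 0.0 for i, g in enumerate(grad)]

def share(value, n_servers):
    """Additively secret-share a field element: the shares sum to value mod p."""
    parts = [random.randrange(PRIME) for _ in range(n_servers - 1)]
    parts.append((value - sum(parts)) % PRIME)
    return parts

# --- toy run: 3 users, 2 non-colluding servers, 5-dim gradients, k = 2 ---
random.seed(0)
n_users, n_servers, d, k = 3, 2, 5, 2
user_grads = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(n_users)]
sparse_grads = [top_k_sparsify(g, k) for g in user_grads]

# Each server accumulates only its own shares and learns nothing alone.
server_sums = [[0] * d for _ in range(n_servers)]
for grad in sparse_grads:
    for j, g in enumerate(grad):
        for s, part in enumerate(share(encode(g), n_servers)):
            server_sums[s][j] = (server_sums[s][j] + part) % PRIME

# Combining the servers' sums reconstructs only the aggregated gradient.
agg_field = [sum(server_sums[s][j] for s in range(n_servers)) % PRIME
             for j in range(d)]
aggregate = [decode(v) for v in agg_field]

# Linear MAC sketch: with a key vector r shared among the users, the tag
# <r, g> mod p is additive, so one aggregated tag (constant size, independent
# of the number of shared gradients) checks the entire aggregated result.
r = [random.randrange(PRIME) for _ in range(d)]
tags = [sum(ri * encode(g) for ri, g in zip(r, grad)) % PRIME
        for grad in sparse_grads]
assert sum(ri * v for ri, v in zip(r, agg_field)) % PRIME == sum(tags) % PRIME

# Sanity check against the plaintext sum of the sparsified gradients.
expected = [sum(g[j] for g in sparse_grads) for j in range(d)]
assert all(abs(a - e) < 1e-5 for a, e in zip(aggregate, expected))
```

A real deployment would transmit only the selected (index, share) pairs rather than zeroed dense vectors; the dense form above just keeps the sketch short.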
  • Cited by

    Periodical citations (19)

    1. Bao Xiaoli. Trusted data spaces: dual governance by technology and institutions. Zhejiang Academic Journal. 2024(01): 89-100+239-240.
    2. Lin Ning, Zhang Liang. Research on personalized recommendation systems based on federated learning. Sci-Tech Innovation and Productivity. 2024(04): 27-30.
    3. Li Xuan, Deng Tianpeng, Xiong Jinbo, Jin Biao, Lin Jie. Federated learning watermarking based on model backdoors. Journal of Software. 2024(07): 3454-3468.
    4. Hong Zhen, Feng Wanglei, Wen Zhenyu, Wu Di, Li Taotao, Wu Yiming, Wang Cong, Ji Shouling. Detection of free-rider attacks in federated learning based on gradient backtracking. Journal of Computer Research and Development. 2024(09): 2185-2198.
    5. Chen Ka. A data privacy protection method for federated learning based on model partitioning. Telecommunications Science. 2024(09): 136-145.
    6. Yu Shengxing, Chen Zhong. Efficient and secure federated learning aggregation framework based on homomorphic encryption. Journal on Communications. 2023(01): 14-28.
    7. Lin Li, Zhang Xiaoying, Shen Wei, Wang Wanxiang. FastProtector: an efficient federated learning method supporting gradient privacy protection. Journal of Electronics & Information Technology. 2023(04): 1356-1365.
    8. Gu Yuhao, Bai Yuebin. Research progress on the security and privacy of federated learning models. Journal of Software. 2023(06): 2833-2864.
    9. Guo Songyue, Wang Yangqian, Bai Siyuan, Liu Yongheng, Zhou Jun, Wang Mengge, Liao Qing. Federated adaptive interaction model for mixed data distributions. Journal of Computer Research and Development. 2023(06): 1346-1357.
    10. Chen Wanzhen, Zhang En, Qin Leiyong, Hong Shuangxi. Blockchain-based privacy-preserving federated learning algorithm for edge computing. Journal of Computer Applications. 2023(07): 2209-2216.
    11. Gao Ying, Chen Xiaofeng, Zhang Yiyu, Wang Wei, Deng Huanghao, Duan Pei, Chen Peixuan. A survey of attack and defense techniques for federated learning systems. Chinese Journal of Computers. 2023(09): 1781-1805.
    12. Zhang Lianfu, Tan Zuowen. A federated learning privacy protection method for multimodal medical data. Computer Science. 2023(S2): 933-940.
    13. Zhou Zan, Zhang Xiaoyan, Yang Shujie, Li Hongjing, Kuang Xiaohui, Ye Heliang, Xu Changqiao. Adaptive incentive mechanism for privacy-preserving computation in federated computing-power networks. Chinese Journal of Computers. 2023(12): 2705-2725.
    14. Mo Huiling, Zheng Haifeng, Gao Min, Feng Xinxin. Multi-source heterogeneous data fusion algorithm based on federated learning. Journal of Computer Research and Development. 2022(02): 478-487.
    15. Chen Qianxin, Bi Renwan, Lin Jie, Jin Biao, Xiong Jinbo. Privacy-preserving federated learning framework supporting a majority of irregular users. Chinese Journal of Network and Information Security. 2022(01): 139-150.
    16. Hou Kunchi, Wang Nan, Zhang Kejia, Song Lei, Yuan Qi, Miao Fengjuan. Semi-supervised federated learning model based on autoencoder neural networks. Application Research of Computers. 2022(04): 1071-1074+1104.
    17. Zhan Yufeng, Wang Jiasheng, Xia Yuanqing. Data trading mechanism for federated learning. Journal of Command and Control. 2022(02): 122-132.
    18. Xiao Linsheng, Qian Shenyi. Efficient and secure federated learning based on parallel homomorphic encryption and STC. Communications Technology. 2021(04): 922-928.
    19. Liu Biao, Zhang Fangjiao, Wang Wenxin, Xie Kang, Zhang Jianyi. Byzantine-robust federated learning algorithm based on matrix mapping. Journal of Computer Research and Development. 2021(11): 2416-2429.

    Other citation types (43)

    Article views (2394) · PDF downloads (1277) · Cited by (62)