Zhang Xiaojian, Zhang Leilei, Zhang Zhizheng. Federated Learning Method Under User-Level Local Differential Privacy[J]. Journal of Computer Research and Development, 2025, 62(2): 472-487. DOI: 10.7544/issn1000-1239.202330167

Federated Learning Method Under User-Level Local Differential Privacy

Funds: This work was supported by the National Natural Science Foundation of China (62072156, 61502146, 91646203, 91746115) and the Basic Research Special Projects of Key Research Projects in Higher Education Institutes in Henan Province (25ZX012).
  • Author Bio:

    Zhang Xiaojian: born in 1980. PhD, professor, master supervisor. His main research interests include differential privacy, data mining, and graph data management

    Zhang Leilei: born in 1997. Master candidate. His main research interests include differential privacy and federated learning

    Zhang Zhizheng: born in 1996. Master candidate. His main research interests include differential privacy and federated analytics

  • Received Date: March 16, 2023
  • Revised Date: January 07, 2024
  • Accepted Date: March 05, 2024
  • Available Online: March 06, 2024
  • Federated learning with user-level local differential privacy (ULDP) has attracted considerable research attention in recent years. The trade-off among federated data types, the mechanism for clipping local updates, the allocation of the privacy budget, and user dropout directly constrain the accuracy of the global model. Existing federated learning methods handle these problems poorly. To remedy these deficiencies, we employ ULDP to propose an efficient algorithm, called ULDP-FED, for global federated optimization. ULDP-FED can handle both IID and non-IID federated data. Compared with methods that use a fixed clipping threshold, ULDP-FED uses a dynamic threshold-decay strategy to balance the noise error introduced by the Gaussian mechanism against the bias caused by update clipping. To allocate each user's privacy budget carefully, in each round ULDP-FED relies on similarity to replace the current local update with a historical noisy update. If a suitable historical update is found, the user sends only its index to the server, which reduces the communication cost. ULDP-FED is compared with existing methods on the MNIST and CIFAR-10 datasets. The experimental results show that our algorithm outperforms its competitors and achieves more accurate federated learning results.
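The two mechanisms the abstract describes — clipping each local update under a decaying threshold before adding Gaussian noise, and reusing a similar historical noisy update by sending only its index — can be sketched as below. This is a minimal illustration, not the paper's exact formulation: the function names (`clip_and_noise`, `decayed_threshold`, `maybe_reuse`), the exponential decay schedule, and the cosine-similarity criterion are all assumptions made for the sake of a concrete example.

```python
import numpy as np


def clip_and_noise(update, clip_threshold, sigma, rng):
    """Clip an update to L2 norm <= clip_threshold, then add Gaussian noise
    whose scale is proportional to the clipping threshold (sensitivity)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_threshold / max(norm, 1e-12))
    noise = rng.normal(0.0, sigma * clip_threshold, size=update.shape)
    return clipped + noise


def decayed_threshold(c0, gamma, round_t):
    """Assumed exponential decay schedule for the clipping threshold:
    a large threshold early (less bias) shrinking over rounds (less noise)."""
    return c0 * (gamma ** round_t)


def maybe_reuse(history, current, tau):
    """Return the index of the most similar past noisy update if its cosine
    similarity with the current update reaches tau; otherwise None.
    Sending only this index saves both privacy budget and bandwidth."""
    best_i, best_s = None, tau
    for i, h in enumerate(history):
        denom = np.linalg.norm(h) * np.linalg.norm(current) + 1e-12
        s = float(np.dot(h, current) / denom)
        if s >= best_s:
            best_i, best_s = i, s
    return best_i


# One round from a single user's perspective (toy 2-D "update").
rng = np.random.default_rng(0)
update = np.array([3.0, 4.0])                      # raw local update, L2 norm 5
c = decayed_threshold(c0=2.0, gamma=0.9, round_t=3)
history = [np.array([0.6, 0.8]), np.array([-1.0, 0.0])]

idx = maybe_reuse(history, update, tau=0.99)
if idx is not None:
    pass  # send only the index `idx`; no fresh budget spent this round
else:
    noisy = clip_and_noise(update, c, sigma=0.5, rng=rng)  # send noisy update
```

In this sketch the noise scale is tied to the clipping threshold because the threshold bounds the update's sensitivity; decaying the threshold therefore shrinks both the bias cap and the injected noise over rounds, which is the trade-off the abstract refers to.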
