Zhou Chunyi, Chen Dawei, Wang Shang, Fu Anmin, Gao Yansong. Research and Challenge of Distributed Deep Learning Privacy and Security Attack[J]. Journal of Computer Research and Development, 2021, 58(5): 927-943. DOI: 10.7544/issn1000-1239.2021.20200966

Research and Challenge of Distributed Deep Learning Privacy and Security Attack

Funds: This work was supported by the National Natural Science Foundation of China (62072239, 62002167), the Guangxi Key Laboratory of Trusted Software (KX202029), and the Fundamental Research Funds for the Central Universities (30920021129).
  • Published Date: April 30, 2021
Abstract: Unlike centralized deep learning, distributed deep learning removes the requirement that training data be gathered in one place: data stay where they are produced, and all participants collaborate without exchanging raw data. This significantly reduces the risk of user privacy leakage, breaks down data silos at the technical level, and improves the efficiency of deep learning. Distributed deep learning can therefore be widely applied in smart healthcare, smart finance, smart retail, and smart transportation. However, typical attacks such as generative adversarial network (GAN) attacks, membership inference attacks, and backdoor attacks have revealed that distributed deep learning still suffers from serious privacy vulnerabilities and security threats. This paper first compares and analyzes the characteristics and core problems of three distributed deep learning modes: collaborative learning, federated learning, and split learning. Second, from the perspective of privacy attacks, it comprehensively reviews the various types of privacy attacks faced by distributed deep learning and summarizes existing defenses against them. From the perspective of security attacks, it then analyzes the attack processes and inherent threats of three security attacks, namely data poisoning attacks, adversarial example attacks, and backdoor attacks, and examines existing defense techniques in terms of defense principles, adversary capabilities, and defense effects. Finally, from the perspectives of privacy and security attacks, future research directions for distributed deep learning are discussed.
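
To make the "collaborate without exchanging data" idea concrete, the sketch below shows one round of a minimal federated-averaging style protocol. It is an illustrative assumption rather than any specific scheme from the paper: the linear model, the toy client data, and the function names (local_update, federated_average) are all hypothetical. Each client takes a single gradient step on its private shard, and the server only averages the resulting model weights.

    # Minimal sketch of one federated-averaging round (illustrative only;
    # the model, client data, and function names are assumptions, not the
    # paper's method).
    import numpy as np

    def local_update(global_weights, local_data, lr=0.1):
        # Each client trains on its own data; raw data never leave the
        # client. "Training" here is one gradient step of a linear
        # least-squares model.
        X, y = local_data
        grad = X.T @ (X @ global_weights - y) / len(y)
        return global_weights - lr * grad

    def federated_average(client_weights, client_sizes):
        # The server sees only model updates and aggregates them,
        # weighted by each client's local dataset size.
        total = sum(client_sizes)
        return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

    # Toy setting: three clients, each holding a private (X, y) shard.
    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
    global_w = np.zeros(5)

    for _ in range(5):  # five communication rounds
        updates = [local_update(global_w, data) for data in clients]
        global_w = federated_average(updates, [len(y) for _, y in clients])

The attacks surveyed in the paper target exactly the updates exchanged in a loop like this: membership inference and GAN-based attacks try to recover information about the private shards from the shared weights or gradients, while poisoning and backdoor attacks manipulate the local updates before aggregation.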
Related Articles

    [1]Xie Guo, Zhang Huaiwen, Wang Le, Liao Qing, Zhang Aoqian, Zhou Zhili, Ge Huilin, Wang Zhiheng, Wu Guozheng. Acceptance and Funding Status of Artificial Intelligence Discipline Projects Under the National Natural Science Foundation of China in 2024[J]. Journal of Computer Research and Development, 2025, 62(3): 648-661. DOI: 10.7544/issn1000-1239.202550008
    [2]Li Xu, Zhu Rui, Chen Xiaolei, Wu Jinxuan, Zheng Yi, Lai Chenghang, Liang Yuxuan, Li Bin, Xue Xiangyang. A Survey of Hallucinations in Large Vision-Language Models: Causes, Evaluations and Mitigations[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202440444
    [3]Chen Xuanting, Ye Junjie, Zu Can, Xu Nuo, Gui Tao, Zhang Qi. Robustness of GPT Large Language Models on Natural Language Processing Tasks[J]. Journal of Computer Research and Development, 2024, 61(5): 1128-1142. DOI: 10.7544/issn1000-1239.202330801
    [4]Zhang Mi, Pan Xudong, Yang Min. JADE-DB: A Universal Testing Benchmark for Large Language Model Safety Based on Targeted Mutation[J]. Journal of Computer Research and Development, 2024, 61(5): 1113-1127. DOI: 10.7544/issn1000-1239.202330959
    [5]Shu Wentao, Li Ruixiao, Sun Tianxiang, Huang Xuanjing, Qiu Xipeng. Large Language Models: Principles, Implementation, and Progress[J]. Journal of Computer Research and Development, 2024, 61(2): 351-361. DOI: 10.7544/issn1000-1239.202330303
    [6]Yang Yi, Li Ying, Chen Kai. Vulnerability Detection Methods Based on Natural Language Processing[J]. Journal of Computer Research and Development, 2022, 59(12): 2649-2666. DOI: 10.7544/issn1000-1239.20210627
    [7]Pan Xudong, Zhang Mi, Yang Min. Fishing Leakage of Deep Learning Training Data via Neuron Activation Pattern Manipulation[J]. Journal of Computer Research and Development, 2022, 59(10): 2323-2337. DOI: 10.7544/issn1000-1239.20220498
    [8]Pan Xuan, Xu Sihan, Cai Xiangrui, Wen Yanlong, Yuan Xiaojie. Survey on Deep Learning Based Natural Language Interface to Database[J]. Journal of Computer Research and Development, 2021, 58(9): 1925-1950. DOI: 10.7544/issn1000-1239.2021.20200209
    [9]Zheng Haibin, Chen Jinyin, Zhang Yan, Zhang Xuhong, Ge Chunpeng, Liu Zhe, Ouyang Yike, Ji Shouling. Survey of Adversarial Attack, Defense and Robustness Analysis for Natural Language Processing[J]. Journal of Computer Research and Development, 2021, 58(8): 1727-1750. DOI: 10.7544/issn1000-1239.2021.20210304
    [10]Wang Ye, Chen Junwu, Xia Xin, Jiang Bo. Intelligent Requirements Elicitation and Modeling: A Literature Review[J]. Journal of Computer Research and Development, 2021, 58(4): 683-705. DOI: 10.7544/issn1000-1239.2021.20200740
