Citation: Liu Junxu, Meng Xiaofeng. Survey on Privacy-Preserving Machine Learning[J]. Journal of Computer Research and Development, 2020, 57(2): 346-362. DOI: 10.7544/issn1000-1239.2020.20190455

Survey on Privacy-Preserving Machine Learning

Funds: This work was supported by the National Natural Science Foundation of China (91646203, 61532010, 91846204, 61532016, 61762082) and the National Key Research and Development Program of China (2016YFB1000602, 2016YFB1000603).
More Information
  • Published Date: January 31, 2020
  • Abstract: Large-scale data collection has vastly improved the performance of machine learning and yielded both economic and social benefits, while personal privacy now faces new and greater risks. In this paper, we summarize the privacy issues in machine learning and the existing work on privacy-preserving machine learning. We discuss two settings of the model training process: centralized learning and federated learning. The former collects all user data before training; although this setting is easy to deploy, it carries serious hidden privacy and security risks. The latter enables a large number of devices to collaboratively train a global model while keeping their data local; as it is still at an early stage of study, many problems remain to be solved. Existing privacy-preserving techniques follow two main lines: cryptographic methods, including homomorphic encryption and secure multi-party computation, and perturbation methods, represented by differential privacy, each with its own advantages and disadvantages. We first focus on the design of differentially private machine learning algorithms, especially in the centralized setting, and discuss the differences between traditional machine learning models and deep learning models. We then summarize the open problems in current federated learning research. Finally, we outline the main challenges for future work and point out the connections among privacy protection, model interpretability, and data transparency.
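The centralized, differentially private training that the abstract highlights can be illustrated with a small gradient-perturbation example. The sketch below is only an assumption-laden illustration in the style of DP-SGD (per-example gradient clipping followed by Gaussian noise), not an algorithm taken from the paper; the synthetic data, the logistic-regression model, and parameters such as clip_norm and noise_multiplier are hypothetical choices made for this example.

```python
# Minimal sketch (assumptions only): differentially private gradient descent
# for logistic regression, with per-example clipping and Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical centralized dataset standing in for collected user data.
n, d = 512, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dp_sgd(X, y, epochs=30, lr=0.1, batch_size=64,
           clip_norm=1.0, noise_multiplier=1.0):
    """Clip each example's gradient to bound its sensitivity, then add
    Gaussian noise to the summed gradient before the model update."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        batch = rng.choice(len(X), size=batch_size, replace=False)
        grads = []
        for i in batch:
            g = (sigmoid(X[i] @ w) - y[i]) * X[i]                    # per-example gradient
            g *= min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))   # clip to clip_norm
            grads.append(g)
        noise = rng.normal(scale=noise_multiplier * clip_norm, size=w.shape)
        w -= lr * (np.sum(grads, axis=0) + noise) / batch_size       # noisy update
    return w

w = dp_sgd(X, y)
print("training accuracy:", np.mean((sigmoid(X @ w) > 0.5) == y))
```

The clipping step bounds each example's influence on the update, so the added Gaussian noise can be calibrated to a privacy budget; tracking the exact (ε, δ) guarantee would additionally require a privacy accountant, which is omitted in this sketch.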
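The federated setting described in the abstract keeps raw data on the devices and shares only model updates. As a minimal sketch, assuming a simple least-squares task, synthetic client datasets, and hypothetical parameters (local learning rate, number of rounds), the following code implements plain federated averaging; it illustrates the general pattern, not the training protocol studied in the paper.

```python
# Minimal sketch (assumptions only): federated averaging on a synthetic
# least-squares task; raw client data never leaves the client.
import numpy as np

rng = np.random.default_rng(1)
d, num_clients = 10, 5

def make_client_data(n=100):
    """Hypothetical private dataset held by one client."""
    X = rng.normal(size=(n, d))
    y = X @ np.ones(d) + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client_data() for _ in range(num_clients)]

def local_update(w, X, y, lr=0.05, steps=10):
    """A few local gradient-descent steps on the client's own data."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Server loop: broadcast the global model, collect locally trained models,
# and average them weighted by local dataset size.
w_global = np.zeros(d)
for _ in range(20):
    local_models = [local_update(w_global.copy(), X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    w_global = np.average(local_models, axis=0, weights=sizes)

print("distance to the true weights:", np.linalg.norm(w_global - np.ones(d)))
```

Exchanging model updates alone does not by itself guarantee privacy; the updates can still leak information, which is one motivation for combining federated learning with techniques such as secure aggregation or differential privacy.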
