    A Survey of Privacy Attack and Defense Techniques for Split Learning Systems[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202440703

    A Survey of Privacy Attack and Defense Techniques for Split Learning Systems

    Split learning is an emerging distributed learning technique whose main idea is to split a complete machine learning model and deploy the resulting parts on the client and the server respectively. During training and inference, the client's data is kept local and only the encoded intermediate features are passed to the server, which protects the client's data privacy to a certain extent while alleviating the computational burden of running the full model on the client device. As split learning is applied across a growing range of domains, a variety of privacy attacks targeting split learning systems have emerged. Attackers can leverage intermediate information exchanged at the partition layer, such as intermediate features and gradients, to reconstruct users' private data or infer their private information, posing a severe threat to data privacy. Currently, academia lacks a systematic and comprehensive overview of research achievements in split learning: some studies confuse it with federated learning, while others offer insufficiently detailed summaries. This paper aims to fill this gap by comprehensively surveying the relevant attack and defense techniques in split learning, providing guidance for subsequent research and development. Firstly, we introduce the definition of split learning, its training and inference processes, and its various extended architectures. Subsequently, we analyze the threat model of split learning systems and summarize the fundamental concepts, implementation stages, and existing schemes of reconstruction attacks, as well as inference attacks targeting split learning systems, including attribute inference, membership inference, and label inference. Furthermore, we summarize the corresponding defense techniques, encompassing anomaly detection, regularization-based defense, noise addition, adversarial representation training, and feature pruning. Finally, we discuss research challenges and future directions in addressing privacy and security issues in split learning.
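To make the exchange described above concrete, the following is a minimal sketch (not from the paper) of one split-learning training loop: the client holds the layers up to the cut, the server holds the rest, and only the "smashed" intermediate features and their gradients cross the partition layer. All names, sizes, and the toy two-layer model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy split: client holds the first (cut) layer,
# server holds the remaining layer. Weight shapes are arbitrary.
W_client = rng.normal(size=(4, 3)) * 0.1   # client-side weights
W_server = rng.normal(size=(3, 1)) * 0.1   # server-side weights

x = rng.normal(size=(8, 4))                # private client data (never sent)
y = rng.normal(size=(8, 1))                # labels (their location varies by variant)

lr = 0.1
losses = []
for _ in range(200):
    # --- client: forward up to the cut layer, send smashed features ---
    smashed = np.tanh(x @ W_client)

    # --- server: finish the forward pass and compute the loss ---
    pred = smashed @ W_server
    losses.append(np.mean((pred - y) ** 2))

    # --- server: backprop to the cut layer; only grad_smashed is returned ---
    d_pred = 2 * (pred - y) / len(x)
    grad_W_server = smashed.T @ d_pred
    grad_smashed = d_pred @ W_server.T

    # --- client: finish backprop locally using the returned gradient ---
    d_pre = grad_smashed * (1 - smashed ** 2)   # tanh' via cached activation
    grad_W_client = x.T @ d_pre

    W_server -= lr * grad_W_server
    W_client -= lr * grad_W_client
```

Note that `smashed` and `grad_smashed` are exactly the intermediate quantities the surveyed attacks exploit: reconstruction attacks invert the smashed features back toward `x`, and label inference attacks analyze the returned gradients.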