Secure Multiparty Computation and Application in Machine Learning
Graphical Abstract
Abstract
With the emergence and development of artificial intelligence and big data, large-scale data collection and analysis applications have been widely deployed, which raises concerns about privacy leakage. These privacy concerns further prevent data exchange among organizations and result in “data silos”. Secure multiparty computation (MPC) allows multiple organizations to perform privacy-preserving collaborative data analytics without leaking any plaintext data during the interactions, making the data “usable but not visible”. MPC technologies have been extensively studied in both academia and industry and have given rise to various technical branches. Privacy-preserving machine learning (PPML) has become a typical and widely deployed application of MPC, and various PPML schemes have been proposed to perform privacy-preserving training and inference without leaking model parameters or sensitive data. In this paper, we systematically analyze various MPC schemes and their applications in PPML. First, we review the relevant security models and objectives, as well as the development of MPC primitives (i.e., garbled circuits, oblivious transfer, secret sharing, and homomorphic encryption). Then, we summarize the strengths and weaknesses of these primitives and identify the scenarios in which each is most appropriate, followed by a thorough analysis of their applications in PPML. Finally, we point out future research directions for MPC and its applications in PPML.