Communication-Efficient Federated Learning Based on Secret Sharing and Compressed Sensing
Graphical Abstract
Abstract
The rapid development of deep learning has brought great convenience, but it has also led to the disclosure of large amounts of private data. Federated learning (FL) allows clients to jointly train a model by sharing only gradients, which appears to solve the privacy leakage problem; however, research has shown that the gradients transmitted in FL frameworks can still leak private information. Moreover, the high communication cost of FL makes it difficult to deploy in resource-constrained environments. We therefore propose two communication-efficient and secure FL algorithms that use Top-K sparsification and compressed sensing to reduce the communication overhead of gradient transmission, and further use additive secret sharing from secure multi-party computation (MPC) to encrypt the measurements of the important gradient parameters, thereby reducing communication overhead and enhancing security at the same time. The main difference between the two algorithms lies in what the client and server exchange: one transmits the gradient measurements themselves, while the other transmits quantized gradient measurements. Experiments on the MNIST and Fashion-MNIST datasets show that, compared with other algorithms, the proposed algorithms achieve stronger security at a lower communication cost and attain better model accuracy.
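To make the two building blocks named above concrete, the following is a minimal sketch (not the paper's actual protocol) of Top-K gradient sparsification combined with additive secret sharing of the retained values. All function names, the fixed-point scale, and the modulus are illustrative assumptions; the paper additionally applies compressed sensing to the sparsified gradient, which is omitted here for brevity.

```python
import numpy as np

def top_k_sparsify(grad, k):
    """Keep the k largest-magnitude entries of a flat gradient; zero the rest."""
    idx = np.argsort(np.abs(grad))[-k:]  # indices of the k largest magnitudes
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse, idx

def additive_shares(values, n_parties, modulus=2**31):
    """Split integer-encoded values into n additive shares mod `modulus`.
    The shares sum (mod `modulus`) back to the original values."""
    shares = [np.random.randint(0, modulus, size=values.shape, dtype=np.int64)
              for _ in range(n_parties - 1)]
    shares.append((values - sum(shares)) % modulus)
    return shares

# Demo: sparsify a gradient, fixed-point encode, secret-share, reconstruct.
rng = np.random.default_rng(0)
grad = rng.standard_normal(10)
sparse, idx = top_k_sparsify(grad, k=3)

scale = 10**6  # fixed-point scaling so values can be shared as integers
encoded = np.round(sparse[idx] * scale).astype(np.int64)
shares = additive_shares(encoded, n_parties=3)

recovered = sum(shares) % 2**31
# Map mod-2^31 representatives back to signed integers before rescaling.
recovered = np.where(recovered >= 2**30, recovered - 2**31, recovered)
print(np.allclose(recovered / scale, sparse[idx], atol=1e-6))  # True
```

Each individual share is uniformly random modulo 2^31 and thus reveals nothing about the gradient on its own; only the sum of all shares reconstructs the encoded values, which is the property the abstract relies on to protect the transmitted measurements.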