Abstract:
Federated learning with user-level local differential privacy (ULDP) has attracted considerable research attention in recent years. The type of federated data (IID or non-IID), the clipping of local updates, the allocation of the privacy budget, and user dropout all directly constrain the accuracy of the global model, and existing federated learning methods struggle to handle these issues simultaneously. To remedy these deficiencies, we propose an efficient ULDP-based algorithm, called ULDP-FED, for global federated optimization. ULDP-FED handles both IID and non-IID federated data. Unlike methods with a fixed clipping threshold, ULDP-FED uses a dynamically decaying threshold to balance the noise error introduced by the Gaussian mechanism against the bias introduced by update clipping. To allocate each user's privacy budget carefully, in each round ULDP-FED replaces the current local update with a sufficiently similar historical noisy update when one exists; in that case the user sends only the index of that historical update to the server, which also reduces communication cost. We compare ULDP-FED with existing methods on the MNIST and CIFAR-10 datasets. The experimental results show that our algorithm outperforms its competitors and yields more accurate federated models.
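
The abstract does not specify the exact decay schedule for the clipping threshold. The following is a minimal sketch of one plausible per-round clip-and-noise step, assuming an exponential decay C_t = C_0 * gamma^t and a Gaussian noise scale proportional to the current threshold; `c0`, `gamma`, and `noise_multiplier` are illustrative placeholders, not values from the paper.

```python
import numpy as np

def clip_and_noise(update, round_t, c0=1.0, gamma=0.99,
                   noise_multiplier=1.0, rng=None):
    """One local privatization step with a decaying clipping threshold.

    c0, gamma, and noise_multiplier are illustrative assumptions; the
    paper's actual schedule and noise calibration may differ.
    """
    rng = rng or np.random.default_rng()
    c_t = c0 * gamma ** round_t                  # threshold decays each round
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, c_t / max(norm, 1e-12))  # L2-norm clipping
    sigma = noise_multiplier * c_t               # noise scale tracks c_t
    return clipped + rng.normal(0.0, sigma, size=update.shape)
```

A smaller threshold in later rounds lowers the Gaussian noise magnitude but clips more aggressively, which is the noise-versus-bias trade-off the decay strategy is meant to balance.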
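The reuse of historical noisy updates can likewise be sketched as a similarity test on the user side. Cosine similarity and the threshold `tau` are assumptions for illustration, since the abstract does not name the similarity measure or its cutoff.

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_message(current_update, history, tau=0.9):
    """Decide whether to reuse a stored noisy update or send a fresh one.

    history: noisy updates the user has already released in earlier rounds.
    tau: hypothetical similarity threshold; the paper's criterion may differ.
    Returns ('index', i) to tell the server to reuse history[i], or
    ('new', None) to privatize and transmit the current update.
    """
    if history:
        sims = [cosine_sim(current_update, h) for h in history]
        best = int(np.argmax(sims))
        if sims[best] >= tau:
            return ('index', best)
    return ('new', None)
```

Because a reused update was already privatized when it was first released, pointing the server back to it consumes no additional privacy budget (by the post-processing property of differential privacy) and costs only a single integer of communication.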