Abstract:
Federated learning with user-level local differential privacy (ULDP) has attracted considerable research attention in recent years. The trade-offs among federated data types, the mechanism for clipping local updates, the allocation of the privacy budget, and user dropout directly constrain the accuracy of the global learning model. To remedy the deficiencies of current methods, this paper employs ULDP to propose an efficient algorithm, called ULDP-FED, that achieves global federated optimization. ULDP-FED can handle both IID and Non-IID federated data. Compared to methods with fixed clipping thresholds, ULDP-FED uses a dynamic threshold decay strategy to balance the noise error introduced by the Gaussian mechanism against the bias introduced by update clipping. To conserve each user's privacy budget, in each round ULDP-FED uses a similarity measure to decide whether to replace the current local update with a historical noisy update; the user then sends only the index of the selected historical update to the server, which also reduces communication cost. ULDP-FED is compared with existing methods on the MNIST and CIFAR10 datasets. The experimental results show that our algorithm outperforms its competitors and achieves accurate federated learning results.
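The client-side mechanics described above (a decaying clipping threshold, Gaussian perturbation of the clipped update, and similarity-based reuse of historical noisy updates) can be sketched as follows. The abstract does not specify the decay schedule, similarity measure, or noise calibration, so the exponential decay, L2 clipping, and cosine similarity used here are assumptions for illustration only, not the paper's exact method.

```python
import numpy as np

def decayed_clip_threshold(c0, decay_rate, t):
    # Assumed exponential decay schedule for the clipping threshold at round t;
    # the paper only states that the threshold decays dynamically.
    return c0 * (decay_rate ** t)

def clip_update(update, threshold):
    # Standard L2 clipping: scale the update down if its norm exceeds the threshold.
    norm = np.linalg.norm(update)
    if norm > threshold:
        update = update * (threshold / norm)
    return update

def gaussian_mechanism(update, threshold, sigma, rng):
    # Gaussian mechanism: after clipping, the L2 sensitivity is the threshold,
    # so the noise standard deviation scales with it.
    return update + rng.normal(0.0, sigma * threshold, size=update.shape)

def select_historical_index(current, history, sim_threshold):
    # Return the index of the most similar previously-sent noisy update
    # (cosine similarity, an assumed choice), or None if nothing in the
    # history is similar enough. Sending only an index saves both the
    # privacy budget and communication cost for this round.
    best_idx, best_sim = None, sim_threshold
    cur_norm = np.linalg.norm(current)
    for i, h in enumerate(history):
        sim = float(current @ h) / (cur_norm * np.linalg.norm(h) + 1e-12)
        if sim >= best_sim:
            best_idx, best_sim = i, sim
    return best_idx
```

In a round, a client would clip its raw update with the current decayed threshold, check the similarity of the result against its stored noisy updates, and either transmit a single index (on a match) or perturb the clipped update with `gaussian_mechanism` and send the fresh noisy vector.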