
    Federated Learning under User-Level Local Differential Privacy


       

      Abstract: Federated learning under user-level local differential privacy (ULDP) has attracted considerable research attention in recent years. The type of federated data, the clipping of local updates, the allocation of the privacy budget, and user dropout directly constrain the accuracy of the global federated model. To address the shortcomings of existing methods on these problems, this paper proposes an efficient ULDP-based federated learning algorithm, ULDP-FED. ULDP-FED can handle both IID and non-IID federated data. Unlike methods with a fixed clipping threshold, ULDP-FED uses a dynamic threshold-decay strategy to balance the noise error introduced by the Gaussian mechanism against the bias introduced by update clipping. To conserve each user's privacy budget, in every round ULDP-FED searches the user's historical noisy local updates for one that is sufficiently similar to the current local update; if such an update exists, the user uploads only its index, which also reduces the communication cost between users and the server. Experiments on the MNIST and CIFAR10 datasets show that ULDP-FED achieves higher model accuracy than comparable existing methods.
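The clipping step with a decaying threshold can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the parameter names (`c0`, `decay`, `sigma`) and the exponential decay schedule are assumptions, and the noise scale is simply taken proportional to the clipping threshold, as is common for the Gaussian mechanism.

```python
import numpy as np

def clip_and_noise(update, round_t, c0=1.0, decay=0.99, sigma=1.0, rng=None):
    """Clip a local update to a decaying threshold, then add Gaussian noise.

    Hypothetical parameters for illustration:
      c0    -- initial clipping threshold
      decay -- per-round multiplicative decay of the threshold
      sigma -- noise multiplier of the Gaussian mechanism
    """
    rng = rng or np.random.default_rng()
    c_t = c0 * (decay ** round_t)  # dynamically decayed clipping threshold
    norm = float(np.linalg.norm(update))
    # Scale the update down only if its norm exceeds the current threshold.
    clipped = update * min(1.0, c_t / norm) if norm > 0 else update
    # Gaussian mechanism: noise standard deviation proportional to c_t,
    # since c_t bounds the sensitivity of the clipped update.
    return clipped + rng.normal(0.0, sigma * c_t, size=update.shape)
```

A smaller threshold in later rounds injects less noise (lower error from the Gaussian mechanism) but clips more aggressively (higher bias); the decay rate trades the two off.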

       
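The historical-update reuse described in the abstract can be sketched as below. Cosine similarity and the `threshold` parameter are assumptions for illustration; the paper's exact similarity criterion may differ. The key point is that a matching historical noisy update costs no fresh privacy budget and only an index needs to be transmitted.

```python
import numpy as np

def pick_update(current, history, threshold=0.95):
    """Return ('index', i) if a stored noisy update i is similar enough to
    the current local update, else ('update', current).

    history   -- list of previously uploaded noisy updates (np.ndarray)
    threshold -- assumed cosine-similarity cutoff for reuse
    """
    best_i, best_sim = None, -1.0
    for i, h in enumerate(history):
        sim = float(np.dot(current, h) /
                    (np.linalg.norm(current) * np.linalg.norm(h) + 1e-12))
        if sim > best_sim:
            best_i, best_sim = i, sim
    if best_i is not None and best_sim >= threshold:
        # Uploading only the index reuses already-released noise,
        # saving both privacy budget and communication.
        return ('index', best_i)
    return ('update', current)
```

In a full protocol the server would keep the same history per user, so an index uniquely identifies the update being reused.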

