    Tian Jiahui, Lü Xixiang, Zou Renpeng, Zhao Bin, Li Yige. A Fair Resource Allocation Scheme in Federated Learning[J]. Journal of Computer Research and Development, 2022, 59(6): 1240-1254. DOI: 10.7544/issn1000-1239.20201081

    A Fair Resource Allocation Scheme in Federated Learning

       

      Abstract: Federated learning (FL) is a distributed machine learning framework that can be used to solve the data-silo problem: multiple participants collaborate to train a global model while keeping their data locally private. However, traditional federated learning ignores fairness, which may degrade the quality of the trained global model. Because different participants hold highly heterogeneous data of widely varying sizes, conventional training methods such as naively minimizing an aggregate loss function may disproportionately advantage or disadvantage some devices, so the final global model shows a large accuracy gap across different participants' data. To train a global model more fairly, we propose a fairness method called α-FedAvg, with which the final global model achieves a more balanced distribution of accuracy over the participants' local data. Meanwhile, we devise a method to determine the parameter α, which improves the fairness of the global model while preserving its performance. To evaluate our scheme, we test the global model on the MNIST and CIFAR-10 datasets and compare α-FedAvg with three other fairness schemes on multiple datasets. Compared with existing schemes, ours achieves a better balance between fairness and effectiveness.
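      The fairness issue described above can be sketched in code. The following is a minimal, hypothetical illustration, not the paper's actual α-FedAvg update rule (the abstract does not specify it): standard FedAvg weights each client's update by its data size, while an α-parameterized variant additionally upweights clients with higher local loss, which pushes the global model toward a more uniform accuracy distribution across clients.

```python
import numpy as np

def fedavg_aggregate(client_updates, client_sizes):
    # Standard FedAvg: weight each client's model update by its data size.
    weights = np.array(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))

def alpha_fair_aggregate(client_updates, client_sizes, client_losses, alpha):
    # Illustrative alpha-parameterized reweighting (an assumption, not the
    # paper's exact rule): clients with higher local loss receive
    # proportionally more weight. With alpha = 0 this reduces to FedAvg;
    # larger alpha gives struggling clients more influence on the aggregate.
    weights = np.array(client_sizes, dtype=float) * np.power(client_losses, alpha)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))
```

      The single scalar α thus trades off average performance (α = 0, plain FedAvg) against uniformity of per-client accuracy (larger α), which mirrors the fairness–effectiveness balance the abstract describes.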

       
