Abstract:
Federated learning (FL) is a distributed machine learning framework that addresses the data-silo problem: multiple participants collaborate to train a global model while keeping their data local and private. However, traditional federated learning ignores fairness, which can degrade the quality of the trained global model. Because participants hold data of different sizes that are highly heterogeneous, conventional training methods, such as naively minimizing an aggregate loss function, may disproportionately advantage or disadvantage some devices, so the final global model exhibits large accuracy gaps across different participants’ data. To train a global model more fairly, we propose a fairness method called α-FedAvg, which yields a final global model whose accuracy is more evenly distributed over the participants’ local data. We also devise a method to select the parameter α that improves the fairness of the global model while preserving its performance. To evaluate our scheme, we test the global model on the MNIST and CIFAR-10 datasets and compare α-FedAvg with three other fairness schemes on multiple datasets. Compared with existing schemes, ours achieves a better balance between fairness and effectiveness.
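The abstract does not spell out α-FedAvg's aggregation rule. As a minimal illustration of the general idea of fairness-aware reweighting in FL, the sketch below contrasts standard size-weighted FedAvg averaging with a hypothetical variant that upweights clients with higher local loss via a `loss**alpha` factor; the function names and the reweighting rule are assumptions for illustration, not the paper's actual method.

```python
def fedavg(updates, sizes):
    # Standard FedAvg: weight each client's update by its local data size.
    total = float(sum(sizes))
    return sum(s / total * u for s, u in zip(sizes, updates))

def alpha_weighted_avg(updates, sizes, losses, alpha=1.0):
    # Hypothetical fairness-aware variant (not the paper's exact rule):
    # upweight clients with higher local loss by loss**alpha, so clients
    # the model currently serves poorly pull the average toward their data.
    # alpha = 0 recovers plain size weighting.
    raw = [s * (l ** alpha) for s, l in zip(sizes, losses)]
    total = float(sum(raw))
    return sum(r / total * u for r, u in zip(raw, updates))
```

With scalar "updates" for simplicity, two equally sized clients whose updates are 1.0 and 3.0 average to 2.0 under FedAvg; if the second client's loss is lower, the α-weighted variant shifts the average toward the first client.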