Abstract:
Federated learning is a distributed machine learning framework that enables clients to train models collaboratively without transmitting their data to a server, addressing the problems of data silos and data privacy. It works well when clients have similar data characteristics and distributions. In many scenarios, however, differences in data distribution make global model training difficult. Personalized federated learning has therefore been proposed as a new federated learning paradigm; it aims to guarantee the effectiveness of each client's personalized model through collaboration between the clients and the server. Intuitively, tighter collaboration among clients with similar data characteristics and distributions facilitates the construction of personalized models. Because client data are invisible to the server, however, it is challenging to extract fine-grained client features and to define the collaborative relationships between clients. In this paper, we design an attention-enhanced meta-learning network (AMN) to address this issue. AMN uses model parameters as features and trains a meta-learning network that provides each client with an additional meta-model, which automatically analyzes the similarity of client features. Through AMN's two-layer framework, a reasonable trade-off between client personalization and commonality is achieved, yielding a hybrid model that incorporates useful information from all clients. Since both the meta-model and each client's local model must be maintained during training, we design an alternating training strategy that performs the training in an end-to-end manner. To demonstrate the effectiveness of our method, we conduct extensive experiments on two benchmark datasets against eight baseline methods. Compared with the best-performing existing personalized federated learning methods, our method improves accuracy by an average of 3.39% and 2.45% on the two datasets, respectively.
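As an informal illustration of the attention idea described above, the toy sketch below computes per-client attention weights from parameter similarity and mixes client models into a per-client hybrid model. This is a hypothetical simplification, not the paper's actual AMN architecture; every function name and formula here is assumed for illustration only.

```python
# Toy sketch (hypothetical, not the authors' AMN): treat each client's model
# parameters as its feature vector, weight clients by parameter similarity,
# and build a per-client hybrid model as an attention-weighted mixture.
import numpy as np

def attention_weights(params, i, temperature=1.0):
    """Softmax over negative L2 distances between client i's parameters and
    every client's parameters (closer clients receive larger weights)."""
    dists = np.array([np.linalg.norm(params[i] - p) for p in params])
    logits = -dists / temperature
    logits -= logits.max()              # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def hybrid_model(params, i, temperature=1.0):
    """Per-client hybrid model: attention-weighted mix of all client models."""
    w = attention_weights(params, i, temperature)
    return sum(wk * pk for wk, pk in zip(w, params))

# Three toy clients; clients 0 and 1 have similar parameters.
clients = [np.array([1.0, 0.0]), np.array([1.1, 0.1]), np.array([5.0, 5.0])]
w0 = attention_weights(clients, 0)
assert w0[1] > w0[2]                    # the similar client is weighted more
```

In a real personalized federated learning system the attention weights would be produced by a trained meta-network rather than a fixed distance kernel, but the sketch shows why parameter similarity can serve as a proxy for data-distribution similarity without inspecting client data.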