Citation: Feng Yiming, Qian Zhen, Li Guanghui, Dai Chenglong. Synergistic Optimization Method for Adaptive Hierarchical Federated Learning in Heterogeneous Edge Environments[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202550146
Traditional hierarchical federated learning (HFL) faces significant challenges in real-world deployments due to device heterogeneity, data heterogeneity (e.g., variations in data volume and feature distribution), and communication resource constraints. Device heterogeneity leads to inefficient cross-device collaboration during model training, whereas data heterogeneity degrades the accuracy and generalization capability of the global model. To address these limitations while maximizing the utilization of computation, communication, and data resources in heterogeneous edge networks, we propose an adaptive synergistic method for hierarchical federated learning. The method jointly optimizes model partitioning and client selection under hardware resource constraints, communication bottlenecks, and non-independent and identically distributed (Non-IID) data conditions, accelerating federated training while improving model accuracy and adaptability across heterogeneous environments. To quantify the influence of local datasets on global model convergence, a data contribution metric is introduced to evaluate the consistency of client contributions. Furthermore, by combining deep reinforcement learning (DRL) with real-time resource monitoring and data contribution quantification, a DRL agent dynamically optimizes the client selection and edge-cloud collaborative model partitioning strategies before each training round. This adaptive mechanism leverages system resource availability (e.g., bandwidth, device status) and local data contribution scores to derive optimal policies, thereby accelerating training convergence and improving global model accuracy. Simulation results demonstrate that the proposed method achieves significant improvements in model accuracy and training efficiency over baseline methods, while exhibiting robust adaptability across diverse heterogeneous environment configurations.
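The abstract describes a per-round control loop in which a DRL agent observes system resources and per-client data contribution scores, then decides which clients participate and where to partition the model between device and edge/cloud. As a rough illustration only (not the authors' implementation), the sketch below shows how such a loop might be wired together; the names `contribution_score`, `DRLAgent`, and `training_round`, the state layout, and the total-variation-based contribution metric are all assumptions, since the abstract does not specify them.

```python
# Illustrative sketch of the per-round decision loop outlined in the abstract:
# a policy observes resource availability and data-contribution scores, then
# outputs a client-selection mask and an edge-cloud partition point.
# All names below are hypothetical placeholders.

import numpy as np


def contribution_score(client_label_counts: np.ndarray,
                       global_label_dist: np.ndarray) -> float:
    """Assumed contribution metric: consistency of a client's local label
    distribution with the global distribution (1 - total variation distance).
    The paper's actual metric may differ."""
    local = client_label_counts / max(client_label_counts.sum(), 1)
    return 1.0 - 0.5 * np.abs(local - global_label_dist).sum()


class DRLAgent:
    """Placeholder policy; a trained DRL agent (e.g., DQN or PPO) would map
    the state vector to a selection mask and a partition layer index."""
    def act(self, state: np.ndarray, num_clients: int, num_layers: int):
        rng = np.random.default_rng()
        selection = rng.random(num_clients) < 0.5     # which clients train
        partition = int(rng.integers(1, num_layers))  # split-layer index
        return selection, partition


def training_round(agent, bandwidth, device_load, contributions, num_layers):
    # State = resource availability + data-contribution scores (assumed layout).
    state = np.concatenate([bandwidth, device_load, contributions])
    selection, partition = agent.act(state, len(bandwidth), num_layers)
    # Selected clients would train the device-side sub-model up to `partition`,
    # the edge/cloud would complete the remaining layers, and the resulting
    # models would then be aggregated hierarchically.
    return selection, partition


# Toy usage: one decision step for 4 clients and a 10-layer model.
# agent = DRLAgent()
# sel, cut = training_round(agent,
#                           bandwidth=np.array([5.0, 2.0, 8.0, 3.0]),
#                           device_load=np.array([0.2, 0.9, 0.4, 0.6]),
#                           contributions=np.array([0.8, 0.3, 0.6, 0.9]),
#                           num_layers=10)
```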