• China high-quality sci-tech journal
  • CCF-recommended Class A Chinese journal
  • Class T1 high-quality sci-tech journal in the computing field

An Asynchronous Federated Learning Mechanism for Edge Network Computing

Lu Xiaofeng, Liao Yuying, Pietro Lio, Pan Hui

Lu Xiaofeng, Liao Yuying, Pietro Lio, Pan Hui. An Asynchronous Federated Learning Mechanism for Edge Network Computing[J]. Journal of Computer Research and Development, 2020, 57(12): 2571-2582. DOI: 10.7544/issn1000-1239.2020.20190754
Lu Xiaofeng, Liao Yuying, Pietro Lio, Pan Hui. An Asynchronous Federated Learning Mechanism for Edge Network Computing[J]. Journal of Computer Research and Development, 2020, 57(12): 2571-2582. CSTR: 32373.14.issn1000-1239.2020.20190754


  • CLC number: TP301.6

An Asynchronous Federated Learning Mechanism for Edge Network Computing

Funds: This work was supported by the National Natural Science Foundation of China (61472046), the Beijing Association for Science and Technology Seed Fund, and the Ant Financial Security Special Research Fund.
  • Abstract: With the continuous improvement of the performance of IoT and mobile devices, a new computing architecture, edge computing, has emerged. Edge computing changes the situation in which data must be uploaded to the cloud for processing, making full use of the computing and storage capabilities of edge IoT devices. Edge nodes process private data locally and no longer need to upload large amounts of data to the cloud, reducing transmission delay. The demand for running artificial intelligence workloads on edge nodes is also growing day by day. Because federated learning does not require data to be centralized for model training, it is well suited to edge-network machine learning scenarios in which each node holds only a limited amount of data. This paper proposes an efficient asynchronous federated learning mechanism for edge network computing (EAFLM), which compresses the redundant communication between nodes and the parameter server during training according to a self-adaptive threshold. A gradient update algorithm based on dual-weight correction allows nodes to join or withdraw from federated learning at any stage of training. Experimental results show that when gradient communication is compressed to 8.77% of the original number of communications, test-set accuracy drops by only 0.03%.
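The communication compression the abstract describes, sending a gradient update only when it has changed enough since the last transmission, can be illustrated with a minimal sketch. This is not the paper's exact EAFLM algorithm: the function names (`compress_update`, `adapt_threshold`), the component-wise filtering rule, and the multiplicative threshold adaptation are assumptions chosen for illustration; the paper's self-adaptive threshold and dual-weight correction are defined in the full text.

```python
import numpy as np

def compress_update(grad, last_sent, threshold):
    """Transmit only the gradient components whose change since the
    last transmitted value exceeds the threshold; the rest are withheld
    (illustrative fixed-threshold rule, not the paper's exact criterion)."""
    delta = grad - last_sent
    mask = np.abs(delta) > threshold          # components worth sending
    sparse = np.where(mask, grad, 0.0)        # what goes to the server
    last_sent = np.where(mask, grad, last_sent)
    return sparse, mask, last_sent

def adapt_threshold(threshold, sent_fraction, target=0.1, step=1.1):
    """Naive self-adaptation: raise the threshold when too many
    components were sent, lower it otherwise (hypothetical rule)."""
    return threshold * step if sent_fraction > target else threshold / step

# Demo: one node's local gradient against its last transmitted state.
rng = np.random.default_rng(0)
grad = rng.normal(size=10)
last = np.zeros(10)
sparse, mask, last = compress_update(grad, last, threshold=0.5)
print(f"{mask.sum()} of {mask.size} components transmitted")
threshold = adapt_threshold(0.5, mask.mean())
```

Under a rule like this, the parameter server only receives the masked components each round, which is how the reported reduction to a small fraction of the original communication count becomes possible.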
Metrics
  • Article views:  3153
  • HTML full-text views:  19
  • PDF downloads:  1798
  • Citations: 0
Publication history
  • Published online:  2020-11-30
