    Lu Xiaofeng, Liao Yuying, Pietro Lio, Pan Hui. An Asynchronous Federated Learning Mechanism for Edge Network Computing[J]. Journal of Computer Research and Development, 2020, 57(12): 2571-2582. DOI: 10.7544/issn1000-1239.2020.20190754

    An Asynchronous Federated Learning Mechanism for Edge Network Computing


    Abstract: With the continuous improvement in the performance of IoT and mobile devices, a new computing architecture, edge computing, has emerged. Edge computing changes the paradigm in which data must be uploaded to the cloud for processing, and instead makes full use of the computing and storage capabilities of edge IoT devices. Edge nodes process private data locally and no longer need to upload large amounts of data to the cloud, which reduces transmission delay. At the same time, the demand for running artificial intelligence workloads on edge nodes is growing steadily. Because federated learning does not require data to be centralized for model training, it is well suited to edge-network machine learning scenarios in which the average amount of data per node is limited. To address these challenges, this paper proposes an efficient asynchronous federated learning mechanism for edge network computing (EAFLM), which compresses redundant communication between nodes and the parameter server during training according to an adaptive threshold. Its gradient update algorithm with dual-weight correction allows nodes to join or withdraw from federated learning at any stage. Experiments show that when gradient communication is compressed to 8.77% of the original number of communications, test accuracy drops by only 0.03%.
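
    To make the communication-compression idea concrete, below is a minimal Python sketch of adaptive-threshold gradient filtering on a single edge node. It is an illustration under stated assumptions, not the paper's algorithm: the EdgeNode class, the residual accumulation, and the multiplicative threshold-adaptation rule are all hypothetical, and EAFLM's dual-weight correction step is not reproduced here.

```python
# Hypothetical sketch of adaptive-threshold gradient compression.
# Not the EAFLM algorithm from the paper; names and the adaptation
# rule are illustrative assumptions.
import numpy as np

class EdgeNode:
    """Edge node that uploads a gradient only when it is significant."""

    def __init__(self, dim: int, threshold: float = 0.05, decay: float = 0.9):
        self.threshold = threshold     # current significance threshold
        self.decay = decay             # multiplicative adaptation factor
        self.residual = np.zeros(dim)  # accumulated gradients not yet sent

    def local_step(self, grad: np.ndarray):
        """Return an update to send to the parameter server, or None to skip."""
        self.residual += grad
        if np.linalg.norm(self.residual) > self.threshold:
            update = self.residual.copy()
            self.residual[:] = 0.0
            self.threshold /= self.decay  # raise threshold after an upload
            return update
        self.threshold *= self.decay      # lower threshold so a node
        return None                       # cannot stay silent forever

# Toy usage: eight local training rounds on one node.
rng = np.random.default_rng(0)
node = EdgeNode(dim=4)
for t in range(8):
    update = node.local_step(0.02 * rng.standard_normal(4))
    print(f"round {t}: {'upload' if update is not None else 'skip'}")
```

    In sketches like this, accumulating skipped gradients as a residual rather than discarding them is what typically keeps the accuracy loss small even when most communication rounds are suppressed.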

       
