Lu Xiaofeng, Liao Yuying, Pietro Lio, Pan Hui. An Asynchronous Federated Learning Mechanism for Edge Network Computing[J]. Journal of Computer Research and Development, 2020, 57(12): 2571-2582. DOI: 10.7544/issn1000-1239.2020.20190754

An Asynchronous Federated Learning Mechanism for Edge Network Computing

Funds: This work was supported by the National Natural Science Foundation of China (61472046), the Beijing Association for Science and Technology Seed Fund, and the Ant Financial Security Special Research Fund.
  • Published Date: November 30, 2020
Abstract: With the continuous improvement in the performance of IoT and mobile devices, a new computing architecture, edge computing, has emerged. Edge computing changes the situation in which data must be uploaded to the cloud for processing, making full use of the computing and storage capabilities of edge IoT devices. Edge nodes process private data locally and no longer need to upload large amounts of data to the cloud, which reduces transmission delay. The demand for running artificial intelligence frameworks on edge nodes is also growing. Because the federated learning mechanism does not require data to be centralized for model training, it is well suited to edge-network machine learning scenarios in which the amount of data per node is limited. This paper proposes an efficient asynchronous federated learning mechanism for edge network computing (EAFLM), which compresses the redundant communication between the nodes and the parameter server during training according to a self-adaptive threshold; a simple sketch of this idea follows below. A gradient update algorithm based on dual-weight correction allows nodes to join or withdraw from federated learning at any stage of training. Experimental results show that when gradient communication is compressed to 8.77% of the original number of communications, the accuracy on the test set drops by only 0.03%.
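To make the communication-compression idea concrete, the following Python fragment is a minimal sketch of threshold-gated gradient uploading on a single node. The class name, the norm test, and the multiplicative threshold-adjustment rule are illustrative assumptions for exposition only; they are not the paper's exact EAFLM algorithm or its dual-weight correction.

```python
import numpy as np

class ThresholdGatedNode:
    """Accumulates local gradients and uploads them only when they are
    large enough to matter (illustrative sketch, not the EAFLM algorithm)."""

    def __init__(self, dim, threshold=1e-3, decay=0.9):
        self.residual = np.zeros(dim)   # gradient mass withheld so far
        self.threshold = threshold      # self-adaptive cut-off
        self.decay = decay              # how quickly the cut-off adapts

    def local_step(self, gradient):
        """Return an update for the parameter server, or None to skip
        this round's communication."""
        self.residual += gradient
        if np.linalg.norm(self.residual) >= self.threshold:
            update = self.residual.copy()
            self.residual[:] = 0.0
            self.threshold /= self.decay   # raise the bar after a send
            return update
        self.threshold *= self.decay       # lower the bar while silent
        return None
```

In a full asynchronous setting, the parameter server would apply each uploaded update as it arrives; the dual-weight correction described in the paper is what allows nodes to join or withdraw partway through training without retraining from scratch.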
  • Related Articles

    [1]Wen Yimin, Yuan Zhe, Yu Hang. A New Semi-Supervised Inductive Transfer Learning Framework: Co-Transfer[J]. Journal of Computer Research and Development, 2023, 60(7): 1603-1614. DOI: 10.7544/issn1000-1239.202220232
    [2]Ma Xinyu, Fan Yixing, Guo Jiafeng, Zhang Ruqing, Su Lixin, Cheng Xueqi. An Empirical Investigation of Generalization and Transfer in Short Text Matching[J]. Journal of Computer Research and Development, 2022, 59(1): 118-126. DOI: 10.7544/issn1000-1239.20200626
    [3]Weng Zejia, Chen Jingjing, Jiang Yugang. On the Generalization of Face Forgery Detection with Domain Adversarial Learning[J]. Journal of Computer Research and Development, 2021, 58(7): 1476-1489. DOI: 10.7544/issn1000-1239.2021.20200803
    [4]Sun Xiaoyi, Liu Huafeng, Jing Liping, Yu Jian. Deep Generative Recommendation Based on List-Wise Ranking[J]. Journal of Computer Research and Development, 2020, 57(8): 1697-1706. DOI: 10.7544/issn1000-1239.2020.20200497
    [5]Zhuo Junbao, Su Chi, Wang Shuhui, Huang Qingming. Min-Entropy Transfer Adversarial Hashing[J]. Journal of Computer Research and Development, 2020, 57(4): 888-896. DOI: 10.7544/issn1000-1239.2020.20190476
    [6]Wang Ruiqin, Wu Zongda, Jiang Yunliang, Lou Jungang. An Integrated Recommendation Model Based on Two-stage Deep Learning[J]. Journal of Computer Research and Development, 2019, 56(8): 1661-1669. DOI: 10.7544/issn1000-1239.2019.20190178
    [7]Wen Yimin, Tang Shiqi, Feng Chao, Gao Kai. Online Transfer Learning for Mining Recurring Concept in Data Stream Classification[J]. Journal of Computer Research and Development, 2016, 53(8): 1781-1791. DOI: 10.7544/issn1000-1239.2016.20160223
    [8]Wei Wenhong, Wang Jiahai, Tao Ming, Yuan Huaqiang. Multi-Objective Constrained Differential Evolution Using Generalized Opposition-Based Learning[J]. Journal of Computer Research and Development, 2016, 53(6): 1410-1421. DOI: 10.7544/issn1000-1239.2016.20150806
    [9]Hong Jiaming, Yin Jian, Huang Yun, Liu Yubao, Wang Jiahai. TrSVM: A Transfer Learning Algorithm Using Domain Similarity[J]. Journal of Computer Research and Development, 2011, 48(10): 1823-1830.
    [10]Mei Canhua, Zhang Yuhong, Hu Xuegang, Li Peipei. A Weighted Algorithm of Inductive Transfer Learning Based on Maximum Entropy Model[J]. Journal of Computer Research and Development, 2011, 48(9): 1722-1728.
  • Cited by

    Journal citations (14)

    1. 廖涛, 沈文龙, 张顺香, 马文祥. An event argument recognition method based on adversarial training. Computer Engineering and Design. 2024(02): 540-545.
    2. 冯钧, 畅阳红, 陆佳民, 唐海麟, 吕志鹏, 邱钰淳. Construction and application of a knowledge graph for water engineering scheduling based on large language models. Journal of Frontiers of Computer Science and Technology. 2024(06): 1637-1647.
    3. 林海香, 白万胜, 赵正祥, 胡娜娜, 李冬, 陆人杰. A knowledge extraction method for high-speed railway turnout operation and maintenance texts. Journal of Railway Science and Engineering. 2024(07): 2569-2580.
    4. 刘文亮, 吴飞, 何德明, 赵维伟, 潘建宏. A clustering method for fragmented reply texts based on dissimilarity matrices. Computer and Modernization. 2024(09): 56-60.
    5. 乔勇鹏, 于亚新, 刘树越, 王子腾, 夏子芳, 乔佳琪. A joint entity-relation extraction model with graph-convolution-enhanced multi-path decoding. Journal of Computer Research and Development. 2023(01): 153-166.
    6. 杨延云, 杜建强, 聂斌, 罗计根, 贺佳. Joint extraction of traditional Chinese medicine entities and relations combining data augmentation and attention mechanisms. Intelligent Computer and Applications. 2023(08): 186-191+196.
    7. 王哲, 谢玮. CDS simulation of animation video based on an improved CycleGAN model. Computer Simulation. 2022(01): 195-199.
    8. 魏晓, 王晓鑫, 陈永琪, 张惠然. A knowledge graph construction method for the materials domain based on natural language processing. Journal of Shanghai University (Natural Science Edition). 2022(03): 386-398.
    9. 董哲, 王亚, 马传孝, 李志军. A food safety relation extraction model combining adversarial training and capsule networks. Science Technology and Engineering. 2022(23): 10162-10168.
    10. 吴玉, 付雪峰, 王涛. A relation extraction method based on an improved cascaded binary tagging framework. Journal of Nanchang Institute of Technology. 2022(06): 86-90+111.
    11. 何俊, 刘鹏, 聂勇, 吴慎珂, 刘鹏政, 钟可佳. Construction of an electric power knowledge graph based on Seq2seq joint entity-relation extraction. Research and Exploration in Laboratory. 2022(07): 1-5+17.
    12. 田佳来, 吕学强, 游新冬, 肖刚, 韩君妹. A joint entity-relation extraction method based on hierarchical sequence labeling. Acta Scientiarum Naturalium Universitatis Pekinensis. 2021(01): 53-60.
    13. 付雷杰, 曹岩, 白瑀, 冷杰武. Current status and prospects of vertical-domain knowledge graphs in China. Application Research of Computers. 2021(11): 3201-3214.
    14. 张军莲, 张一帆, 汪鸣泉, 黄永健. Joint extraction of Chinese entities and relations based on graph convolutional neural networks. Computer Engineering. 2021(12): 103-111.

    Other citations (19)
