    Wang Yan, Li Nianshuang, Wang Xiling, Zhong Fengyan. Coding-Based Performance Improvement of Distributed Machine Learning in Large-Scale Clusters[J]. Journal of Computer Research and Development, 2020, 57(3): 542-561. DOI: 10.7544/issn1000-1239.2020.20190286

    Coding-Based Performance Improvement of Distributed Machine Learning in Large-Scale Clusters


      Abstract: As models and data sets grow, running large-scale machine learning algorithms in distributed clusters has become a common approach. This approach divides the whole machine learning algorithm and the training data into several tasks, each of which runs on a different worker node; a master node then combines the results of all tasks to obtain the result of the whole algorithm. When a distributed cluster contains a large number of nodes, some worker nodes, called stragglers, inevitably run slower than the others due to resource contention and other causes, so the tasks running on those nodes take significantly longer than those on other nodes. Compared with running replica tasks on multiple nodes, coded computing makes more efficient use of computation and storage redundancy to mitigate the effect of stragglers and communication bottlenecks in large-scale machine learning clusters. This paper introduces the research progress of solving straggler issues and improving the performance of large-scale machine learning clusters based on coding techniques. First, we introduce the background of coding techniques and large-scale machine learning clusters. Second, we divide the related research into several categories according to application scenario: matrix multiplication, gradient computation, data shuffling, and some other applications. Finally, we summarize the difficulties of applying coding techniques in large-scale machine learning clusters and discuss future research trends.
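The straggler mitigation for matrix multiplication mentioned above can be sketched with a toy MDS-style code. This is a minimal illustration, not any specific scheme from the surveyed papers: the master splits A into two row blocks, adds a parity block A1+A2, and hands one block to each of three hypothetical workers, so the product A·x can be decoded from any two replies while the third worker straggles.

```python
import numpy as np

# Toy (3,2) coded matrix-vector multiply: encode 2 row blocks of A into
# 3 worker tasks so that any 2 completed tasks recover A @ x.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
x = rng.standard_normal(3)

A1, A2 = A[:2], A[2:]
tasks = {"w1": A1, "w2": A2, "w3": A1 + A2}  # encoded worker tasks

# Suppose worker w2 straggles; the master decodes from w1 and w3 alone.
r1 = tasks["w1"] @ x
r3 = tasks["w3"] @ x
recovered = np.concatenate([r1, r3 - r1])  # A2 @ x = (A1+A2) @ x - A1 @ x

assert np.allclose(recovered, A @ x)
```

The redundancy cost is one extra worker task per two data blocks; in exchange the master never waits for the slowest node.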
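Gradient computation, another application scenario named in the abstract, can be coded in a similar spirit. The sketch below follows the general idea of gradient coding with scalar stand-ins for the partial gradients; the particular coefficients are an illustrative choice, not a scheme taken from the survey: each of three workers sends one linear combination of the partial gradients g1, g2, g3 it can compute, and the master recovers the full gradient g1+g2+g3 from any two replies, tolerating one straggler.

```python
# Partial gradients from 3 data partitions (scalars for brevity;
# real gradients would be vectors, but the decoding is identical).
g1, g2, g3 = 1.0, 2.0, 3.0

f1 = 0.5 * g1 + g2   # worker 1 holds partitions {1, 2}
f2 = g2 - g3         # worker 2 holds partitions {2, 3}
f3 = 0.5 * g1 + g3   # worker 3 holds partitions {1, 3}

full = g1 + g2 + g3
# The master decodes the full gradient from any pair of surviving workers:
assert abs((f1 + f3) - full) < 1e-9       # worker 2 straggles
assert abs((2 * f1 - f2) - full) < 1e-9   # worker 3 straggles
assert abs((2 * f3 + f2) - full) < 1e-9   # worker 1 straggles
```

Each worker computes two partial gradients instead of one; that factor-of-two redundancy is what buys tolerance of a single straggler per iteration.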

       

