ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2020, Vol. 57 ›› Issue (3): 542-561. doi: 10.7544/issn1000-1239.2020.20190286

• Artificial Intelligence •

Coding-Based Performance Improvement of Distributed Machine Learning in Large-Scale Clusters

Wang Yan, Li Nianshuang, Wang Xiling, Zhong Fengyan   

  1. (School of Software, East China Jiaotong University, Nanchang 330013) (wangyann@189.cn)
  • Online: 2020-03-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (61402172) and the Natural Science Foundation of Jiangxi Province of China (20192BAB217006).



Abstract: With the growth of models and data sets, running large-scale machine learning algorithms in distributed clusters has become common practice. In this setting, the machine learning algorithm and its training data are divided into several tasks, each task runs on a different worker node, and a master node combines the results of all tasks to obtain the result of the whole algorithm. When a distributed cluster contains a large number of nodes, some worker nodes, called stragglers, inevitably run slower than the others due to resource contention and other causes, so the tasks placed on them take significantly longer than those on other nodes. Compared with simply replicating tasks across multiple nodes, coded computing makes more efficient use of computation and storage redundancy to alleviate the effect of stragglers and communication bottlenecks in large-scale machine learning clusters. This paper surveys the research progress on using coding techniques to address straggler issues and to improve the performance of large-scale machine learning clusters. First, we introduce the background of coding techniques and large-scale machine learning clusters. Second, we group the related research by application scenario: matrix multiplication, gradient computing, data shuffling, and several other applications. Finally, we summarize the difficulties of applying coding techniques in large-scale machine learning clusters and discuss future research trends.
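To make the coded-computing idea concrete, the following is a minimal sketch (not from the paper; all names are illustrative) of MDS-style coded matrix-vector multiplication: the matrix is split into two row blocks plus one parity block, so the master can recover the full product from any two of the three worker results and thereby ignore one straggler.

```python
import numpy as np

def encode_tasks(A):
    """Split A into two row blocks and add one parity block.

    Any 2 of the 3 encoded results suffice to reconstruct A @ x,
    so the master can tolerate one straggling worker."""
    A1, A2 = np.split(A, 2, axis=0)
    return {"w1": A1, "w2": A2, "parity": A1 + A2}

def worker(block, x):
    # Each worker computes the product for its encoded sub-task.
    return block @ x

def decode(results):
    """Reconstruct A @ x from any two of the three worker results."""
    if "w1" in results and "w2" in results:
        y1, y2 = results["w1"], results["w2"]
    elif "w1" in results:                      # "w2" straggled
        y1 = results["w1"]
        y2 = results["parity"] - y1            # (A1 + A2)x - A1x = A2x
    else:                                      # "w1" straggled
        y2 = results["w2"]
        y1 = results["parity"] - y2
    return np.concatenate([y1, y2])

# Example: worker "w2" straggles; the master decodes without waiting for it.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
tasks = encode_tasks(A)
results = {k: worker(b, x) for k, b in tasks.items() if k != "w2"}
y = decode(results)
```

In contrast, plain task replication would need a full extra copy of every block to tolerate one straggler; the parity block achieves the same tolerance with half that redundancy, which is the efficiency gain the survey attributes to coded computing.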

Key words: coding technology, machine learning, distributed computing, straggler tolerance, performance improvement

CLC number: