ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2021, Vol. 58 ›› Issue (1): 98-115. DOI: 10.7544/issn1000-1239.2021.20190881


Survey on Network of Distributed Deep Learning Training

Zhu Hongrui1,2, Yuan Guojun1, Yao Chengji3, Tan Guangming1, Wang Zhan1, Hu Zhongzhe1,2,3, Zhang Xiaoyang1,2,3, An Xuejun1   

  1(Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190); 2(University of Chinese Academy of Sciences, Beijing 100049); 3(Megvii Inc., Beijing 100080)
  • Online:2021-01-01
  • Supported by: 
    This work was supported by the CAS Strategic Priority Program(B) (XDB24050200), the General Program of the National Natural Science Foundation of China (61972380, 61702484), and the Innovation Fund from the Institute of Computing Technology, Chinese Academy of Sciences (20166060).

Abstract: In recent years, deep learning has outperformed traditional algorithms in many fields such as image, speech, and natural language processing, and the demand for faster training and greater data processing capacity keeps growing. However, the computing power of a single server is limited and cannot meet this demand, so distributed deep learning training has become the most effective way to scale training capacity. At present, distributed deep learning is bottlenecked by communication over the network during training, which makes the communication network the most influential factor in overall performance, and many studies now target network performance optimization for distributed deep learning. This paper first describes the main performance bottlenecks and the corresponding optimization schemes. It then analyzes in detail the current state-of-the-art ultra-large-scale distributed training architectures and the methods they use to optimize performance. Finally, it gives a comparative summary of the performance optimization schemes, discusses the difficulties that still remain in distributed deep learning training, and points out future research directions.
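For illustration, the following is a minimal sketch of synchronous data-parallel training with gradient all-reduce, the collective-communication step whose network cost is the bottleneck the survey focuses on. It assumes PyTorch's torch.distributed package has already been initialized; the helper names average_gradients and train_step are illustrative and not taken from the paper.

import torch
import torch.distributed as dist

def average_gradients(model):
    # Collective communication: every worker sums its local gradients
    # with all other workers, then divides by the number of workers.
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size

def train_step(model, optimizer, loss_fn, inputs, targets):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()           # local computation on each worker
    average_gradients(model)  # network-bound all-reduce phase
    optimizer.step()
    return loss.item()

As models and clusters grow, the all-reduce phase comes to dominate each iteration, which is why the optimization schemes surveyed here concentrate on the communication network.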

Key words: distributed computing, deep learning, communication network, performance optimization, collective communication, cluster network
