Zhu Hongrui, Yuan Guojun, Yao Chengji, Tan Guangming, Wang Zhan, Hu Zhongzhe, Zhang Xiaoyang, An Xuejun. Survey on Network of Distributed Deep Learning Training[J]. Journal of Computer Research and Development, 2021, 58(1): 98-115. DOI: 10.7544/issn1000-1239.2021.20190881
1 (Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190)
2 (University of Chinese Academy of Sciences, Beijing 100049)
3 (Megvii Inc., Beijing 100080)
Funds: This work was supported by the CAS Strategic Priority Program (B) (XDB24050200), the General Program of the National Natural Science Foundation of China (61972380, 61702484), and the Innovation Fund from the Institute of Computing Technology, Chinese Academy of Sciences (20166060).
In recent years, deep learning has achieved better results than traditional algorithms in many fields such as image, speech, and natural language processing, and the demands on training speed and data processing capability keep growing. However, the computing power of a single server is limited and cannot meet these demands, so distributed training has become the most effective way to scale the computing capability available for deep learning. At present, distributed deep learning training is often bottlenecked by communication over the network during the training process, which makes the communication network the most influential factor in overall performance. A large body of research now targets network performance optimization for distributed deep learning. This paper first describes the main performance bottlenecks and the corresponding optimization schemes. It then analyzes in detail the current state-of-the-art ultra-large-scale distributed training architectures and their performance optimization methods. Finally, it gives a comparative summary of the performance optimization schemes, discusses the difficulties that still remain in distributed deep learning training, and points out future research directions.