    Zhu Hongrui, Yuan Guojun, Yao Chengji, Tan Guangming, Wang Zhan, Hu Zhongzhe, Zhang Xiaoyang, An Xuejun. Survey on Network of Distributed Deep Learning Training[J]. Journal of Computer Research and Development, 2021, 58(1): 98-115. DOI: 10.7544/issn1000-1239.2021.20190881

    Survey on Network of Distributed Deep Learning Training

    • In recent years, deep learning has achieved better results than traditional algorithms in many fields such as image, speech, and natural language processing. Demands on training speed and data processing capability keep growing, yet the computing power of a single server is limited and cannot meet them. Distributed training has therefore become the most effective way to scale up the computing capability of deep learning training. At present, distributed deep learning is bottlenecked by communication over the network during training, which makes the communication network the most influential factor in training performance. Many studies have targeted network performance optimization for distributed deep learning. This paper first presents the main performance bottlenecks and optimization schemes. It then analyzes in detail the current state-of-the-art ultra-large-scale distributed training architectures and their performance optimization methods. Finally, it gives a comparative summary of the performance optimization schemes, discusses the difficulties that remain in distributed deep learning training, and points out future research directions.
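    To make the communication bottleneck described above concrete, here is a minimal sketch of one synchronous data-parallel training step. It is not taken from the paper; it assumes PyTorch's torch.distributed API and a launcher such as torchrun that sets the rank and world-size environment variables. After every backward pass, each worker must all-reduce its gradients with all other workers, and this per-step traffic is what the surveyed optimizations (gradient compression, communication scheduling, topology-aware collectives) try to reduce or hide.

```python
# Minimal sketch of synchronous data-parallel training (an illustrative
# assumption, not the paper's method), using PyTorch's torch.distributed.
import torch
import torch.distributed as dist

def train_step(model, loss_fn, batch, target):
    model.zero_grad()
    loss = loss_fn(model(batch), target)
    loss.backward()
    # Communication phase: one all-reduce per parameter tensor.
    # As models grow, this traffic can dominate step time, which is why
    # the communication network becomes the limiting performance factor.
    world_size = dist.get_world_size()
    for p in model.parameters():
        if p.grad is not None:
            dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
            p.grad /= world_size  # average gradients across workers

if __name__ == "__main__":
    dist.init_process_group(backend="gloo")  # "nccl" on GPU clusters
    model = torch.nn.Linear(1024, 10)
    batch = torch.randn(32, 1024)
    target = torch.randint(0, 10, (32,))
    train_step(model, torch.nn.CrossEntropyLoss(), batch, target)
    dist.destroy_process_group()
```

    Launched with, for example, `torchrun --nproc_per_node=4 example.py` (a hypothetical file name), every worker executes the same step; the blocking all-reduce is the synchronization point whose cost the optimization schemes surveyed in this paper address.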