    Zhu Hongrui, Yuan Guojun, Yao Chengji, Tan Guangming, Wang Zhan, Hu Zhongzhe, Zhang Xiaoyang, An Xuejun. Survey on Network of Distributed Deep Learning Training[J]. Journal of Computer Research and Development, 2021, 58(1): 98-115. DOI: 10.7544/issn1000-1239.2021.20190881


    Survey on Network of Distributed Deep Learning Training

    • Abstract: In recent years, deep learning has achieved better results than traditional algorithms in many fields such as image, speech, and natural language processing. As the demand for training speed and data processing capability keeps growing, the computing power of a single server can no longer keep up, and distributed deep learning training has become the most effective way to continue scaling training capability. During distributed training, the communication performance of the inter-node network is critical and directly determines overall training performance. This paper first analyzes the main performance bottlenecks of distributed deep learning and, on that basis, surveys the network performance optimization schemes in common use. It then examines in detail the architectures, optimization methods, and training environments of state-of-the-art ultra-large-scale distributed training, together with the most effective optimizations. Finally, it gives a comparative summary of the optimization schemes, discusses the difficulties that still remain in distributed deep learning training, and points out directions for future research.
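    To make the communication step referred to above concrete, the following is a minimal sketch (not taken from the paper) of synchronous data-parallel training with an explicit gradient all-reduce, written against PyTorch's torch.distributed API; the model, optimizer, and process-group setup are assumed placeholders. Each iteration alternates between purely local computation (forward and backward passes) and a network phase in which every node exchanges gradients with all others; that all-to-all gradient traffic is the pattern whose performance the surveyed optimizations target.

    import torch
    import torch.distributed as dist

    def train_step(model, optimizer, loss_fn, inputs, targets):
        # Local computation phase: forward and backward pass on this node's data shard.
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()

        # Communication phase: every node exchanges gradients over the network.
        # For large models this all-reduce traffic dominates iteration time,
        # which is the bottleneck the survey analyzes.
        world_size = dist.get_world_size()
        for p in model.parameters():
            if p.grad is not None:
                dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
                p.grad /= world_size  # average so every node applies the same update

        # Identical parameter update on every node keeps the replicas in sync.
        optimizer.step()
        return loss.item()

    # Assumed setup (e.g. under torchrun): call dist.init_process_group(backend="nccl")
    # once per process before training, with one model replica per process.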

       

