    Citation: Ji Zeyu, Zhang Xingjun, Fu Zhe, Gao Bosong, Li Jingbo. Performance-Awareness Based Dynamic Batch Size SGD for Distributed Deep Learning Framework[J]. Journal of Computer Research and Development, 2019, 56(11): 2396-2409. DOI: 10.7544/issn1000-1239.2019.20180880


    Performance-Awareness Based Dynamic Batch Size SGD for Distributed Deep Learning Framework

    • Abstract: By increasing model depth and the number of training samples, deep neural networks can deliver state-of-the-art accuracy on many machine learning tasks, including computer vision, speech recognition, and natural language processing. However, these necessary measures also increase the cost of training, so accelerating the training of deep neural networks in a distributed computing environment has become the most common way to cope with this overhead. Stochastic gradient descent (SGD) is one of the most widely used training algorithms for deep neural networks, but it is prone to the stale-gradient problem when parallelized asynchronously, which harms overall convergence. Most existing solutions target high performance computing (HPC) environments in which the nodes have similar performance; few studies consider cluster environments where node performance differs markedly. To address this problem, this paper proposes a performance-aware dynamic batch size SGD algorithm (DBS-SGD). By analyzing the computing capability of each node, the algorithm dynamically assigns each node's mini-batch size so that the time per update iteration is roughly the same across nodes, which lowers the average gradient staleness and effectively mitigates the stale-gradient problem of the asynchronous update strategy. The algorithm is evaluated on the commonly used image classification benchmarks MNIST and CIFAR-10 and compared with asynchronous SGD (ASGD) and the n-soft algorithm. Without reducing the speed-up, the loss on MNIST is lowered by 60%; on CIFAR-10 the accuracy is improved by about 10% and the loss is lowered by about 10%. The resulting performance is better than that of ASGD and n-soft and close to the convergence curve obtained under the synchronous strategy.
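
    The abstract describes the batch-size reassignment only at a high level. As a rough illustration (not the paper's published pseudocode: the function name assign_batch_sizes, the samples-per-second throughput measure, and the assumption that the global batch size stays fixed are ours), the per-node mini-batch sizes can be made proportional to each node's measured speed so that every node finishes an iteration in about the same wall-clock time:

# Hypothetical sketch of performance-aware batch-size assignment in the
# spirit of DBS-SGD. Names and the proportional-split heuristic are ours.
def assign_batch_sizes(throughputs, global_batch_size):
    """throughputs[i]: measured samples/second of node i.
    Returns per-node mini-batch sizes that sum to global_batch_size."""
    total = sum(throughputs)
    # Proportional split: each node gets a share matching its relative speed.
    sizes = [max(1, round(global_batch_size * t / total)) for t in throughputs]
    # Fix rounding drift so the global batch size is preserved exactly.
    drift = global_batch_size - sum(sizes)
    sizes[sizes.index(max(sizes))] += drift
    return sizes

if __name__ == "__main__":
    # Example: three fast nodes and one node that is 4x slower (assumed values).
    throughputs = [400.0, 400.0, 400.0, 100.0]  # samples/sec
    print(assign_batch_sizes(throughputs, global_batch_size=256))
    # -> [78, 79, 79, 20]; every node then needs roughly 0.2 s per iteration.

    Under such a proportional split a node that is four times slower simply processes about a quarter as many samples per iteration, so no node's update lags far behind the others.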

       

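    For readers unfamiliar with the stale-gradient problem mentioned in the abstract, the toy simulation below (our own illustration, not code from the paper; the worker step times and the event-queue bookkeeping are assumptions) counts, for each asynchronously applied gradient, how many global updates were applied between the worker's parameter read and its own push. With heterogeneous step times the slow node's gradients arrive far more stale; equalizing the per-iteration times, which is what DBS-SGD aims for through batch sizing, removes that tail.

# Toy simulation of gradient staleness under asynchronous SGD (illustrative only).
import heapq

def per_node_staleness(step_times, total_updates=2000):
    """Simulate asynchronous SGD with one update applied per gradient push and
    return each node's average gradient staleness (updates applied between the
    node's parameter read and its own push)."""
    events = []   # (finish_time, worker_id, parameter_version_read)
    version = 0   # number of gradient updates applied so far
    for wid, t in enumerate(step_times):
        heapq.heappush(events, (t, wid, version))

    stale_sum = [0] * len(step_times)
    count = [0] * len(step_times)
    for _ in range(total_updates):
        finish, wid, version_read = heapq.heappop(events)
        stale_sum[wid] += version - version_read  # updates this worker missed
        count[wid] += 1
        version += 1                              # apply the pushed gradient
        # The worker re-reads the parameters and starts its next iteration.
        heapq.heappush(events, (finish + step_times[wid], wid, version))
    return [s / max(c, 1) for s, c in zip(stale_sum, count)]

if __name__ == "__main__":
    # One node 4x slower than the rest: its gradients arrive far more stale.
    print("heterogeneous:", per_node_staleness([1.0, 1.0, 1.0, 4.0]))
    # Equal per-iteration times (the situation DBS-SGD tries to restore).
    print("balanced:     ", per_node_staleness([1.0, 1.0, 1.0, 1.0]))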
