Abstract:
With growing network depth and dataset size, deep neural networks are now widely used in artificial intelligence applications including computer vision, speech recognition, and natural language processing, where they deliver state-of-the-art accuracy. However, deeper models and larger datasets substantially increase the cost of training. Stochastic gradient descent (SGD) is commonly parallelized across a distributed computing environment to accelerate training. Nevertheless, parallel SGD readily suffers from stale gradients, which degrade overall convergence. Most existing solutions target high-performance computing (HPC) environments in which all nodes perform similarly; few studies have addressed heterogeneous clusters in which node performance differs widely. This paper proposes a variant of asynchronous SGD (ASGD) in which each node's batch size is modulated according to its runtime performance. Experiments on the commonly used image classification benchmarks MNIST and CIFAR-10 demonstrate the effectiveness of the approach: compared with ASGD and n-soft, the loss on MNIST is reduced by 60% and the accuracy on CIFAR-10 is improved by about 10%, without reducing the speed-up.
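To make the core idea concrete, the following is a minimal, self-contained sketch of one plausible reading of the batch-size rule, not the paper's actual implementation: each node's batch size is scaled by its measured throughput relative to a reference node, so fast and slow nodes complete a batch in roughly the same wall-clock time and gradient staleness stays bounded. All names and constants (BASE_BATCH, adapt_batch_size, the throughput figures) are hypothetical placeholders.

```python
# Hypothetical sketch of performance-aware batch sizing for ASGD workers.
# A node's batch size grows with its measured throughput, so all nodes
# deliver gradient updates at a similar rate despite heterogeneous speed.

BASE_BATCH = 64            # batch size assigned to a reference-speed node
MIN_BATCH, MAX_BATCH = 16, 512

def adapt_batch_size(node_rate, reference_rate):
    """node_rate, reference_rate: measured throughput in samples/second."""
    scale = node_rate / reference_rate
    return max(MIN_BATCH, min(MAX_BATCH, round(BASE_BATCH * scale)))

# Example: three heterogeneous nodes against a 1000 samples/s reference.
for rate in (250.0, 1000.0, 4000.0):
    print(rate, "samples/s ->", adapt_batch_size(rate, 1000.0))
# 250.0 samples/s -> 16   (slow node gets a small batch, stays in step)
# 1000.0 samples/s -> 64
# 4000.0 samples/s -> 256 (fast node gets a large batch)
```

Under this reading, equalizing per-batch wall-clock time across nodes is what limits how stale a gradient can be when it reaches the parameter server.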