Abstract:
With the advent of the big data era, distributed machine learning is widely used to process massive data sets. The most common approach is distributed stochastic gradient descent (SGD), which is vulnerable to various types of Byzantine attacks. To maximize resilience against such attacks while optimizing the objective function under the gradient update rule in a distributed dimensional Byzantine environment, this paper first proposes a new Byzantine attack, the saddle point attack. In contrast to adaptive and non-adaptive methods, the adaptive method with a dynamic bound escapes saddle points quickly when the objective function is stuck at one; comparative experiments are conducted on data set classification tasks. Second, an aggregation rule Saddle(·) for filtering out Byzantine agents is proposed, and the rule is proved to be dimensionally Byzantine resilient. Consequently, in a distributed dimensional Byzantine environment, the adaptive optimization method with a dynamic bound combined with the aggregation rule Saddle(·) can effectively defend against the saddle point attack. Finally, classification error rates from the experiments are compared to analyze the advantages and disadvantages of the dynamic-bound adaptive method relative to adaptive and non-adaptive methods. The results show that the dynamic-bound adaptive method combined with the aggregation rule Saddle(·) is least affected by the saddle point attack in a distributed dimensional Byzantine environment.
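The abstract does not state the exact update rule behind the "adaptive method with a dynamic bound." The following is a minimal sketch assuming an AdaBound-style step, i.e., Adam-style moment estimates with the per-coordinate step size clipped between bounds that tighten over time; the function name and hyperparameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dynamic_bound_step(w, g, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999,
                       final_lr=0.1, gamma=1e-3, eps=1e-8):
    """One AdaBound-style update (hypothetical sketch); t starts at 1."""
    # Adam-style first and second moment estimates of the gradient.
    m = beta1 * m + (1.0 - beta1) * g
    v = beta2 * v + (1.0 - beta2) * g * g
    # Dynamic lower/upper bounds: both converge to final_lr as t grows,
    # so the step behaves like Adam early and like plain SGD late.
    lower = final_lr * (1.0 - 1.0 / (gamma * t + 1.0))
    upper = final_lr * (1.0 + 1.0 / (gamma * t))
    # Clip the per-coordinate adaptive step size between the bounds.
    step = np.clip(alpha / (np.sqrt(v) + eps), lower, upper)
    return w - step * m, m, v
```

The early, Adam-like phase permits aggressive per-coordinate steps, which is the property the abstract credits for escaping saddle points quickly, while the late, SGD-like phase stabilizes convergence.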
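Likewise, the aggregation rule Saddle(·) is only named here, not defined. As a generic illustration of the class it is proved to belong to, dimensionally Byzantine-resilient aggregation, the sketch below uses coordinate-wise median, a standard rule of that class; it stands in for, and is not, the paper's Saddle(·).

```python
import numpy as np

def coordinate_wise_median(worker_grads):
    """Coordinate-wise median of worker gradients (illustrative stand-in).

    worker_grads: list of m arrays, each of shape (d,). Taking the median
    independently in every dimension means a Byzantine worker cannot pull
    any single coordinate arbitrarily far as long as honest workers form
    a majority, which is the essence of dimensional Byzantine resilience.
    """
    return np.median(np.stack(worker_grads), axis=0)
```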