Abstract:
Sequential minimal optimization (SMO) is a popular and effective approach to training the support vector machine on large data sets, but it has drawbacks. At every iteration it uses a random function to select the two samples that most violate the KKT conditions, and this randomness prevents it from converging steadily. Building on the previously proposed analytical method that optimizes the support vector machine with multiple Lagrange multipliers, a new algorithm, MLSVM4, which updates four multipliers per iteration, is proposed and requires no random function. Because it selects the samples used in each iteration more accurately, it converges much faster than the earlier methods, especially for the support vector machine with a linear kernel. Experiments on a wide range of standard data sets, such as Adult, Web, and handwritten digit data, show that MLSVM4 outperforms SMO methods by a factor of 3 to 42.
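The abstract does not spell out the exact MLSVM4 selection rule, so the Python sketch below only illustrates the general idea it alludes to: choosing a working set deterministically from the most KKT-violating samples, with no random function (q = 2 recovers the usual SMO pair, q = 4 mirrors the four-multiplier setting). The function name select_working_set, the gradient-based violation score, and the split into "up" and "down" candidate sets are assumptions borrowed from standard SMO-type working-set selection, not taken from the paper itself.

import numpy as np

def select_working_set(alpha, grad, y, C, q=4):
    # Feasible direction sets for the dual SVM (standard SMO notation):
    # I_up  = multipliers that may still increase,
    # I_low = multipliers that may still decrease.
    up_mask = ((alpha < C) & (y > 0)) | ((alpha > 0) & (y < 0))
    low_mask = ((alpha < C) & (y < 0)) | ((alpha > 0) & (y > 0))

    # Violation score -y_i * grad_i; the KKT conditions hold (up to a
    # tolerance) when the maximum over I_up is <= the minimum over I_low.
    score = -y * grad
    up_score = np.where(up_mask, score, -np.inf)
    low_score = np.where(low_mask, score, np.inf)

    # Deterministically take the q//2 worst violators from each side;
    # no randomness is involved in the choice.
    k = q // 2
    i_up = np.argsort(-up_score)[:k]
    i_low = np.argsort(low_score)[:k]
    return np.concatenate([i_up, i_low])

With q = 2 this reduces to the maximal-violating-pair rule commonly used in SMO implementations; with q = 4 it returns four candidate multipliers per iteration, which is the setting the name MLSVM4 refers to.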