
    MLSVM4: A Fast SVM Training Algorithm Based on Cooperative Optimization of Multiple Lagrange Multipliers

    • Abstract: Sequential minimal optimization (SMO) is an effective method for training support vector machines (SVMs) on large data sets, but its working-set selection strategy picks the two samples in the data set that most violate the KKT conditions and also relies on a random function, which makes the optimization process highly random and degrades training efficiency. Building on a general formulation for the cooperative optimization of multiple Lagrange multipliers, and incorporating the dual-threshold idea from Keerthi's modified SMO algorithm, this paper presents MLSVM4, an algorithm that optimizes four multipliers at a time without resorting to a random function. Because the Lagrange multiplier values of the samples under optimization can be determined more accurately, convergence is greatly accelerated, especially when a linear kernel is used. Experiments on the Adult, Web, and handwritten-digit data sets show that MLSVM4 is 3 to 42 times faster than SMO.
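    The abstract only names the ingredients of the method (selection of the most KKT-violating samples, Keerthi's dual thresholds, four multipliers per step), so the following Python sketch is purely illustrative: it shows a dual-threshold (b_up / b_low) working-set selection in the style of Keerthi's modified SMO, generalized to return up to four candidate indices. The function name select_working_set, the parameter q, and the way the four indices are drawn from the two most violating pairs are assumptions, not the paper's actual MLSVM4 rule.

    import numpy as np

    def select_working_set(F, alpha, y, C, q=4, tau=1e-3):
        """Keerthi-style dual-threshold selection of the most KKT-violating
        samples.  F[i] = sum_j alpha[j]*y[j]*K(x[j], x[i]) - y[i] is the
        cached error term for sample i.

        Returns (indices, converged): up to q indices of the samples that
        violate the KKT conditions most strongly, and a flag that is True
        when b_low <= b_up + 2*tau, i.e. the dual problem is solved to
        tolerance.  q=4 mirrors the four multipliers MLSVM4 optimizes per
        step; the actual MLSVM4 selection rule is not given in the
        abstract, so this choice is only an illustrative assumption.
        """
        # Keerthi's index sets I_up and I_low
        I_up = np.where(((alpha < C) & (y == +1)) | ((alpha > 0) & (y == -1)))[0]
        I_low = np.where(((alpha < C) & (y == -1)) | ((alpha > 0) & (y == +1)))[0]

        # the two thresholds that replace the single bias estimate of plain SMO
        b_up = F[I_up].min()
        b_low = F[I_low].max()
        if b_low <= b_up + 2.0 * tau:
            return [], True  # KKT conditions hold to tolerance: stop training

        # rank candidates by degree of violation: small F in I_up, large F in I_low
        up_sorted = I_up[np.argsort(F[I_up])]
        low_sorted = I_low[np.argsort(-F[I_low])]

        working_set = []
        for i_up, i_low in zip(up_sorted, low_sorted):
            for idx in (int(i_up), int(i_low)):
                if idx not in working_set:
                    working_set.append(idx)
            if len(working_set) >= q:
                break
        return working_set[:q], False

    In a full trainer the returned indices would then be passed to the analytical four-multiplier update that the paper derives; that update is not reproduced here.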

       
