
    Research on a Fast SVM Learning Algorithm with Coordinated Optimization of Multiple Lagrange Multipliers

    SVM Fast Training Algorithm Research Based on Multi-Lagrange Multiplier

    • Abstract: A fast support vector machine training method (MLSVM) based on the coordinated optimization of multiple Lagrange multipliers is proposed, together with a formula that defines the feasible region of each multiplier. Because analytic expressions are used when optimizing each multiplier, the algorithm approaches the optimal solution more precisely and quickly; the SMO algorithm can be shown to be a special case of this method. Under the theoretical guidance of this method, three concrete algorithms (MLSVM1, MLSVM2, MLSVM3) were implemented according to different learning strategies. The first two learn about as fast as SMO when the data set is small (<5000 records), but they fail on larger data sets. MLSVM3 is an improved algorithm: drawing on the reasons why MLSVM1 and MLSVM2 fail, it improves the less efficient parts of the SMO algorithm. On several test data sets, MLSVM3 is 7.4% to 4130% faster than SMO.
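    For context, the multipliers referred to above are the dual variables of the soft-margin SVM. The following is a minimal background sketch, not the paper's own feasible-region formula: the standard dual problem with its box and equality constraints, and the interval [L, H] that these constraints induce for the second multiplier in the two-multiplier (SMO) special case mentioned in the abstract.

```latex
% Standard soft-margin SVM dual (background only; not the paper's
% multi-multiplier feasible-region formula).
\max_{\alpha}\; W(\alpha) = \sum_{i=1}^{n}\alpha_i
  - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j y_i y_j K(x_i, x_j)
\quad \text{s.t.}\quad 0 \le \alpha_i \le C,\qquad \sum_{i=1}^{n}\alpha_i y_i = 0 .

% When only two multipliers (\alpha_1, \alpha_2) are optimized jointly, as in SMO,
% the equality constraint confines \alpha_2 to the interval [L, H]:
L = \begin{cases}
\max(0,\ \alpha_2 - \alpha_1)      & y_1 \ne y_2,\\
\max(0,\ \alpha_1 + \alpha_2 - C)  & y_1 = y_2,
\end{cases}
\qquad
H = \begin{cases}
\min(C,\ C + \alpha_2 - \alpha_1)  & y_1 \ne y_2,\\
\min(C,\ \alpha_1 + \alpha_2)      & y_1 = y_2.
\end{cases}
```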

       

      Abstract: A fast support vector machine training method (MLSVM) based on the coordinated optimization of multiple Lagrange multipliers is proposed, and the formula that defines the feasible region of each multiplier is presented. The algorithm approaches the optimal solution more precisely and quickly because analytic expressions are adopted in the optimization of each multiplier. The SMO algorithm is proved to be a special case of MLSVM. Three individual algorithms, i.e., MLSVM1, MLSVM2 and MLSVM3, are implemented under the theoretical guidance of this method according to different learning strategies. The learning speed of MLSVM1 and MLSVM2 is about the same as that of SMO when the data set is small (<5000 records); however, they fail when the data set becomes larger. MLSVM3 improves on the former two algorithms and on SMO: it not only overcomes the failure of MLSVM1 and MLSVM2, but also runs 7.4% to 4130% faster than the SMO algorithm on several test data sets.
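      As a concrete illustration of the two-multiplier special case named above, here is a minimal sketch of the standard SMO analytic pair update in Python. The function name, interface, and variable names are hypothetical and do not come from the paper; the multi-multiplier MLSVM update, the pair-selection heuristics, and the threshold/error-cache bookkeeping of a full SMO implementation are omitted.

```python
import numpy as np

def smo_pair_update(alpha, y, K, E, i, j, C, eps=1e-12):
    """One analytic SMO step on the multiplier pair (i, j).

    Standard two-multiplier update (the special case of MLSVM mentioned in the
    abstract); names and interface are illustrative, not the paper's code.
    alpha : current dual variables, shape (n,)
    y     : labels in {-1, +1}, shape (n,)
    K     : kernel matrix, shape (n, n)
    E     : prediction errors E_k = f(x_k) - y_k, shape (n,)
    """
    if i == j:
        return False

    # Feasible interval [L, H] for alpha[j] imposed by 0 <= alpha <= C
    # and the equality constraint sum(alpha * y) == 0.
    if y[i] != y[j]:
        L = max(0.0, alpha[j] - alpha[i])
        H = min(C,   C + alpha[j] - alpha[i])
    else:
        L = max(0.0, alpha[i] + alpha[j] - C)
        H = min(C,   alpha[i] + alpha[j])
    if H - L < eps:
        return False

    # Second derivative of the dual objective along the feasible direction.
    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]
    if eta <= eps:
        return False  # this sketch skips non-positive-curvature pairs

    # Unconstrained analytic optimum for alpha[j], then clip to [L, H].
    aj_new = alpha[j] + y[j] * (E[i] - E[j]) / eta
    aj_new = min(max(aj_new, L), H)
    if abs(aj_new - alpha[j]) < eps:
        return False

    # Adjust alpha[i] so that the equality constraint stays satisfied.
    alpha[i] += y[i] * y[j] * (alpha[j] - aj_new)
    alpha[j] = aj_new
    return True

if __name__ == "__main__":
    # Tiny toy problem: linear kernel on four 2-D points.
    X = np.array([[0.0, 1.0], [1.0, 1.0], [0.0, 0.0], [1.0, 0.0]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    K = X @ X.T
    alpha = np.zeros(4)
    E = K @ (alpha * y) - y          # errors for the initial f(x) = 0
    changed = smo_pair_update(alpha, y, K, E, 0, 2, C=1.0)
    print(changed, alpha)
```

      A full trainer would wrap this step in heuristics for choosing the pair (i, j) and would update the threshold b and the error cache after each successful step.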

       
