    Yang Xiaowei, Lu Jie, Zhang Guangquan. An Effective Pruning Algorithm for Least Squares Support Vector Machine Classifier[J]. Journal of Computer Research and Development, 2007, 44(7): 1128-1136.

    An Effective Pruning Algorithm for Least Squares Support Vector Machine Classifier


      Abstract: A well-known drawback of the least squares support vector machine (LS-SVM) is that sparseness is lost. In this study, an effective pruning algorithm is developed to deal with this problem. To avoid solving the initial set of linear equations, a bottom-to-top strategy is adopted in the proposed algorithm. During training, chunking incremental and decremental learning procedures are applied alternately, and a small support vector set, which covers most of the information in the training set, is formed adaptively. Using this support vector set, one can construct the final classifier. To test the validity of the proposed algorithm, it is applied to five benchmark UCI datasets, and different chunking sizes are tested to show the relationships among the chunking size, the number of support vectors, the training time, and the testing accuracy. The experimental results show that, when the chunking size equals 2, the proposed algorithm adaptively obtains sparse solutions with almost no loss of generalization performance, and that its training speed is much faster than that of the sequential minimal optimization (SMO) algorithm. The proposed algorithm applies not only to the LS-SVM classifier but can also be extended to the least squares support vector regression machine.
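
    Background for the sketch below: the standard LS-SVM classifier is trained by solving a single linear system rather than a quadratic program, namely [0, y'; y, Omega + I/gamma][b; alpha] = [0; 1] with Omega_ij = y_i y_j K(x_i, x_j), which is why every training point receives a nonzero alpha_i and sparseness is lost. The Python sketch below is not taken from the paper; it only illustrates, under stated assumptions, what a bottom-to-top chunking scheme of the kind described in the abstract could look like. The RBF kernel, the chunk size, the |alpha|-threshold pruning condition (keep_ratio), and the naive re-solving of the small system at every step are illustrative choices; the paper's actual pruning conditions and incremental/decremental update formulas are not reproduced here.

        import numpy as np

        def rbf_kernel(X1, X2, sigma=1.0):
            # Gaussian RBF kernel matrix between the rows of X1 and X2
            d2 = (np.sum(X1 ** 2, axis=1)[:, None]
                  + np.sum(X2 ** 2, axis=1)[None, :]
                  - 2.0 * X1 @ X2.T)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def train_lssvm(X, y, gamma=10.0, sigma=1.0):
            # Solve the LS-SVM KKT linear system for (b, alpha) on the given points
            n = X.shape[0]
            Omega = (y[:, None] * y[None, :]) * rbf_kernel(X, X, sigma)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = y
            A[1:, 0] = y
            A[1:, 1:] = Omega + np.eye(n) / gamma
            rhs = np.concatenate(([0.0], np.ones(n)))
            sol = np.linalg.solve(A, rhs)
            return sol[0], sol[1:]  # bias b, multipliers alpha

        def chunking_prune_lssvm(X, y, chunk_size=2, keep_ratio=0.1,
                                 gamma=10.0, sigma=1.0, seed=0):
            # Bottom-to-top sketch: alternately add a chunk of points (incremental
            # step) and discard points with small |alpha| (decremental step),
            # re-solving only the small system each time instead of the full one.
            rng = np.random.default_rng(seed)
            order = rng.permutation(len(y))
            sv_idx = []
            for start in range(0, len(order), chunk_size):
                cand = sv_idx + list(order[start:start + chunk_size])    # incremental step
                b, alpha = train_lssvm(X[cand], y[cand], gamma, sigma)
                keep = np.abs(alpha) >= keep_ratio * np.abs(alpha).max() # hypothetical pruning condition
                sv_idx = [cand[i] for i in range(len(cand)) if keep[i]]  # decremental step
            b, alpha = train_lssvm(X[sv_idx], y[sv_idx], gamma, sigma)   # final classifier on the SV set
            return np.array(sv_idx), alpha, b

        def predict(X_test, X_sv, y_sv, alpha, b, sigma=1.0):
            # Decision values of the final sparse classifier
            return np.sign(rbf_kernel(X_test, X_sv, sigma) @ (alpha * y_sv) + b)

    The point of this arrangement is that each linear solve involves only the current candidate set (at most the retained support vectors plus one chunk) rather than all n training points, and the small support vector set that survives the alternating steps defines the final sparse classifier.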

       

