    Kong Kang, Tao Qing, Wang Qunshan, Chu Dejun. A Sub-Gradient Based Solver for the L1-Regularization + Hinge-Loss Problem[J]. Journal of Computer Research and Development, 2012, 49(7): 1494-1499.

    A Sub-Gradient Based Solver for the L1-Regularization + Hinge-Loss Problem

    • Hinge loss is central to the success of support vector machines (SVMs) in machine learning, and L1 regularization plays a crucial role in sparse learning, which is essential for large-scale classification problems. However, both the hinge loss and the L1 regularizer are non-differentiable, and higher-order gradient information is unavailable. In this paper, the optimization problem of L1 regularization plus hinge loss is systematically investigated using the sub-gradient method. We first describe algorithms for the direct sub-gradient method and the projected sub-gradient method in a stochastic setting. To confirm the algorithms' correctness, we conduct a convergence analysis and derive the convergence rate of the stochastic projected sub-gradient method. Experimental results on large-scale text classification data demonstrate that, on large-scale sparse problems, the stochastic projected sub-gradient method achieves a better convergence rate and high sparsity, with many elements of the weight vector being zero. Further, we demonstrate how the projection threshold affects the algorithms' sparsity.
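    The core update described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the sub-gradient of the L1 term is taken as the sign of each weight, the hinge term contributes -y·x when the margin is below 1, and a simple truncation step (zeroing weights whose magnitude falls below a threshold) stands in for the paper's projection rule, which the abstract does not specify. All function names and parameters here are illustrative assumptions.

    ```python
    def hinge_l1_subgrad_step(w, x, y, lam, eta):
        """One stochastic sub-gradient step on lam*||w||_1 + max(0, 1 - y*(w.x)).

        w: weight vector, x: feature vector, y: label in {-1, +1},
        lam: L1 weight, eta: step size. (Illustrative sketch only.)
        """
        margin = y * sum(wj * xj for wj, xj in zip(w, x))
        # Sub-gradient of the hinge term: -y*x if the margin is violated, else 0.
        g_hinge = [-y * xj if margin < 1 else 0.0 for xj in x]
        # Sub-gradient of the L1 term: lam * sign(w_j); 0 is a valid choice at w_j = 0.
        g_l1 = [lam * (1.0 if wj > 0 else -1.0 if wj < 0 else 0.0) for wj in w]
        return [wj - eta * (gh + gl) for wj, gh, gl in zip(w, g_hinge, g_l1)]

    def truncate(w, theta):
        """Illustrative 'projection' step promoting sparsity: zero out any
        coordinate whose magnitude is below the threshold theta. (Assumption:
        a stand-in for the projection/threshold rule studied in the paper.)"""
        return [0.0 if abs(wj) < theta else wj for wj in w]
    ```

    In a training loop, one would repeatedly sample an example, apply `hinge_l1_subgrad_step`, and periodically apply `truncate`; larger thresholds yield sparser weight vectors, which is the trade-off the paper's experiments examine.
    
    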
