Abstract:
L1 regularization plays a crucial role in sparse learning, and higher accuracy can be achieved by using capped-L1 regularization. However, capped-L1 regularization leads to a non-convex optimization problem, which is currently solved by a multi-stage convex relaxation algorithm (MSCR). Since a convex subproblem must be solved exactly at each stage, MSCR incurs unavoidable extra computational cost. To circumvent this drawback, this paper presents a new non-convex coordinate descent algorithm (Non-convex CD) for solving capped-L1 regularization problems, in which only a single step of standard coordinate descent is performed at each stage of MSCR. This strategy effectively reduces the computational cost. Theoretical analysis establishes the convergence of the proposed algorithm. For common Lasso problems, experiments on large-scale text databases demonstrate that Non-convex CD requires even less CPU time than the standard coordinate descent method, while keeping almost the same accuracy as MSCR. To further illustrate the high performance of Non-convex CD in real applications, we also consider an image deblurring example.
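To make the single-step idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of coordinate descent on least squares with the capped-L1 penalty lam * sum_j min(|w_j|, theta). The key point it shows: the per-coordinate weight is refreshed from the current iterate and a single soft-thresholding step is taken, rather than solving each reweighted convex subproblem exactly as MSCR does. All names and parameters (nonconvex_cd, lam, theta, n_passes) are our own assumptions, and details such as stage scheduling and stopping criteria may differ from the paper's algorithm.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding: argmin_w 0.5*(w - z)^2 + t*|w|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def nonconvex_cd(X, y, lam=0.1, theta=1.0, n_passes=50):
    """Illustrative single-step coordinate descent for
    0.5*||y - X w||^2 + lam * sum_j min(|w_j|, theta)."""
    n, d = X.shape
    w = np.zeros(d)
    r = y - X @ w                      # residual, maintained incrementally
    col_sq = (X ** 2).sum(axis=0)      # per-column squared norms
    for _ in range(n_passes):
        for j in range(d):
            if col_sq[j] == 0.0:
                continue
            # Reweighting step: a coordinate past the cap theta is no
            # longer penalized (the capped-L1 penalty is flat there).
            lam_j = lam if abs(w[j]) < theta else 0.0
            # One exact minimization over coordinate j of the weighted
            # convex surrogate, instead of a full MSCR stage.
            z = (X[:, j] @ r) / col_sq[j] + w[j]
            w_new = soft_threshold(z, lam_j / col_sq[j])
            r += X[:, j] * (w[j] - w_new)
            w[j] = w_new
    return w
```

With lam_j fixed at lam (no cap), each inner update reduces to the standard Lasso coordinate descent step, which is why the method's per-iteration cost matches that of regular coordinate descent.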