ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2015, Vol. 52 ›› Issue (5): 1098-1108. doi: 10.7544/issn1000-1239.2015.20131492


Parallel Support Vector Machine Training with Hybrid Programming Model

Li Tao, Liu Xuechen, Zhang Shuai, Wang Kai, Yang Yulu   

  (Department of Computer Science and Information Security, Nankai University, Tianjin 300071)
  Online: 2015-05-01

Abstract: Support vector machine (SVM) is a supervised learning method widely used in statistical classification and regression analysis. Interior point method (IPM) based SVM training is notable for its low memory footprint and fast convergence, but it still faces challenges in training speed and storage space as the training dataset grows. In this paper, a hybrid parallel SVM training mechanism is proposed to alleviate these problems on a CPU-GPU heterogeneous system. First, the compute-intensive operations of the IPM algorithm are implemented with the compute unified device architecture (CUDA), and the IPM-based SVM training algorithm is then modified and implemented with the cuBLAS library to further improve training speed. Second, the modified IPM-based SVM training algorithm is implemented with a message passing interface (MPI) and CUDA hybrid programming model on a four-node cluster system, reducing training time and per-node memory requirements at the same time. Finally, the GPU device memory limitation is eliminated by using the page-locked host memory supported by the Fermi architecture, so that datasets larger than the GPU memory allows can still be trained efficiently. The results show that the hybrid parallel SVM training mechanism achieves more than 4 times speedup with the MPI and CUDA hybrid programming model, and that the page-locked host memory based data storage strategy removes the GPU device memory limitation for large-scale SVM training.
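The abstract's two parallelization ideas, partitioning work across cluster nodes with MPI and streaming data chunks through limited GPU memory via page-locked host buffers, both reduce to computing a large kernel matrix-vector product in row blocks and combining the partial results. The sketch below is not the authors' code: it is a minimal numpy simulation of that blocked computation, where plain CPU arrays and a Python loop stand in for cuBLAS kernels and per-rank (or per-chunk) processing, and the function name `blocked_matvec` is an illustrative invention.

```python
# Hypothetical sketch (not the paper's implementation): blocked evaluation of
# K @ v, the dominant operation in IPM-based SVM training. numpy stands in
# for cuBLAS; the block loop stands in for MPI ranks or GPU-sized chunks
# staged through page-locked host memory.
import numpy as np

def blocked_matvec(K, v, block_rows):
    """Compute K @ v one row block at a time.

    In the hybrid scheme, each MPI rank (or each chunk streamed to the GPU)
    would own one such block; the partial products are concatenated
    (gathered, in MPI terms) to form the full result.
    """
    n = K.shape[0]
    parts = []
    for start in range(0, n, block_rows):
        block = K[start:start + block_rows]   # one chunk resident "on the GPU"
        parts.append(block @ v)               # gemv-style kernel per block
    return np.concatenate(parts)

# Small demonstration with a linear-kernel Gram matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20))
K = X @ X.T                                   # Gram matrix K[i, j] = x_i . x_j
v = rng.standard_normal(1000)

# The blocked result matches the monolithic product regardless of block size,
# which is what lets training proceed on datasets larger than GPU memory.
assert np.allclose(blocked_matvec(K, v, 128), K @ v)
```

The block size plays the role of the GPU-memory budget: smaller blocks trade extra host-device transfers for a lower peak device footprint, which is the trade-off the page-locked memory strategy in the paper manages.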

Key words: support vector machine (SVM) training, compute unified device architecture (CUDA), message passing interface (MPI), page-locked host memory, CPU-GPU heterogeneous system
