Abstract:
Modern machine learning models involve many hyperparameters, and tuning them by hand is an exhausting job; hyperparameter optimization algorithms therefore play an important role in machine learning applications. Among these, sequential model-based optimization (SMBO) and parallel SMBO algorithms are state-of-the-art hyperparameter optimization methods. However, (parallel) SMBO algorithms take into account neither the high-possibility range of the best hyperparameters nor hyperparameter gradients, although both can clearly accelerate traditional hyperparameter optimization. In this paper, we accelerate the traditional SMBO method and name our method AccSMBO. In AccSMBO, we build a novel gradient-based multikernel Gaussian process, whose good generalization ability reduces the influence of gradient noise on the SMBO algorithm. We also design a meta-acquisition function and a parallel resource allocation plan that encourage (parallel) SMBO to focus on the high-possibility range of the best hyperparameters. In theory, our method ensures that all hyperparameter gradient information and all information about the high-possibility range of the best hyperparameters are fully used. In experiments with an L2-norm regularized logistic loss function on datasets of different scales (the small-scale dataset Pc4, the middle-scale dataset Rcv1, and the large-scale dataset Real-sim), our method exhibits the best performance compared with the state-of-the-art gradient-based algorithm HOAG and the state-of-the-art SMBO algorithm SMAC.