Abstract:
To solve uncertain classification problems, a support vector machine (SVM) is trained to behave like a Bayesian optimal classifier on the training data. The idea is to weight each unbalanced training sample by its posterior probability. A complete framework for the posterior probability support vector machine (PPSVM) is presented, and the regular SVM is reformulated as a PPSVM. Linear separability, the margin, the optimal hyperplane, and soft-margin algorithms are discussed. This yields a new optimization problem and a new definition of the support vector. PPSVM is motivated by statistical learning theory and is an extension of the regular SVM. An empirical method for estimating the posterior probability is also proposed. Two artificial examples show that the PPSVM formulation is reasonable when the class-conditional probabilities are known, and experiments on real data demonstrate that weighting samples with the proposed empirical methods can produce better results than the regular SVM.
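
The abstract does not state the PPSVM optimization problem itself, so the following Python sketch only illustrates the core idea it describes: weighting unbalanced training samples by an empirically estimated posterior probability before fitting a soft-margin SVM. The k-NN posterior estimate, the toy dataset, and the use of scikit-learn's sample_weight are assumptions for illustration, not the paper's actual formulation.

```python
# Minimal illustrative sketch (not the paper's exact PPSVM formulation):
# weight each training sample by an estimated posterior probability, then
# fit a soft-margin SVM with those weights.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy unbalanced two-class data (assumed for demonstration):
# 200 negatives, 20 positives.
X_neg = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2))
X_pos = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(20, 2))
X = np.vstack([X_neg, X_pos])
y = np.array([-1] * 200 + [1] * 20)

# Empirical posterior estimate: a k-NN vote serves here as a stand-in for
# the paper's empirical method of determining the posterior probability.
knn = KNeighborsClassifier(n_neighbors=10).fit(X, y)
posterior = knn.predict_proba(X)  # columns follow classes_ = [-1, 1]

# Weight each sample by the estimated posterior of its own label.
class_index = (y == 1).astype(int)
weights = posterior[np.arange(len(y)), class_index]

# Regular soft-margin SVM vs. posterior-weighted SVM.
plain_svm = SVC(kernel="linear", C=1.0).fit(X, y)
weighted_svm = SVC(kernel="linear", C=1.0).fit(X, y, sample_weight=weights)

print("plain SVM support vectors per class:   ", plain_svm.n_support_)
print("weighted SVM support vectors per class:", weighted_svm.n_support_)
```

Down-weighting samples whose labels have low estimated posterior probability reduces the influence of unreliable points on the decision boundary, which is the intuition behind the weighting scheme the abstract summarizes.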