The computational cost of support vector regression in the training phase is O(N^3), which is prohibitively expensive for large-scale problems. In addition, the solution of support vector regression is sparse: it depends on only a subset of the whole training data set. Hence, it is reasonable to reduce the training data set before training. Building on an existing scheme that reduces the training data set by means of k-nearest neighbors at a computational complexity of O(kMN^2), an improved scheme is proposed that accelerates the reduction phase, cutting the complexity from O(kMN^2) to O(MN^2). Finally, experimental results on benchmark data sets validate the effectiveness of the improved scheme.
Funding: supported by the National Natural Science Foundation of China (50576033).
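The abstract does not spell out the selection criterion used by the k-nearest-neighbor reduction scheme, so the following Python sketch is only illustrative. It assumes, hypothetically, that a sample is retained when its target deviates noticeably from the mean target of its k nearest neighbors, since such samples are the ones likely to become support vectors; the names reduce_training_set, k, and threshold are invented for this example.

```python
# Minimal, hypothetical sketch of k-NN-based training-set reduction
# for SVR (the paper's exact criterion is not given in the abstract).
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVR

def reduce_training_set(X, y, k=5, threshold=0.05):
    """Keep samples whose target deviates from the mean target of
    their k nearest neighbors; such samples are more likely to end
    up as support vectors (an assumed, illustrative criterion)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)          # column 0 is the sample itself
    local_mean = y[idx[:, 1:]].mean(axis=1)
    keep = np.abs(y - local_mean) > threshold
    return X[keep], y[keep]

# Usage on synthetic data: train SVR on the reduced set only.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(1000, 1))
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(1000)
X_red, y_red = reduce_training_set(X, y)
model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_red, y_red)
print(f"kept {len(y_red)} of {len(y)} training samples")
```

The threshold rule above is only a stand-in for the paper's criterion; the point of the sketch is the overall pattern, namely that training SVR on the reduced set X_red avoids the O(N^3) cost on the full data while the reduction step itself remains comparatively cheap.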