Funding: Supported by the National Natural Science Key Foundation of China (69974021)
Abstract: A new incremental support vector machine (SVM) algorithm based on multiple kernel learning is proposed. By introducing multiple kernel learning into SVM incremental learning, the large-scale data set learning problem can be solved effectively. Furthermore, different penalty parameters are adopted for the incoming training subset and for the previously acquired support vectors, which helps to improve the performance of the SVM. Simulation results indicate that the proposed algorithm can not only solve the model selection problem in SVM incremental learning but also improve classification and prediction accuracy.
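A minimal sketch of the chunk-wise incremental training loop described above, assuming scikit-learn's SVC, a fixed two-kernel combination standing in for learned multiple-kernel weights, and per-sample weights to realize the two penalty levels; the function names, chunking interface, and parameter values are illustrative assumptions, not the paper's exact formulation. Carrying only the support vectors of the previous round forward is what keeps each retraining small on a large data set.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def combined_kernel(X, Y, weights=(0.5, 0.5)):
    # Fixed convex combination of two base kernels; the paper learns such
    # weights via multiple kernel learning (fixed here as an assumption).
    return weights[0] * rbf_kernel(X, Y) + weights[1] * polynomial_kernel(X, Y, degree=2)

def incremental_svm(chunks, C_sv=10.0, C_new=1.0):
    """Train chunk by chunk, carrying only the current support vectors forward."""
    X_keep, y_keep = None, None
    for X_new, y_new in chunks:
        if X_keep is None:
            X_train, y_train = X_new, y_new
            w = np.full(len(y_new), C_new)
        else:
            X_train = np.vstack([X_keep, X_new])
            y_train = np.concatenate([y_keep, y_new])
            # Heavier penalty on the retained support vectors, lighter on the new chunk;
            # sample_weight scales the per-sample C in scikit-learn.
            w = np.concatenate([np.full(len(y_keep), C_sv),
                                np.full(len(y_new), C_new)])
        clf = SVC(kernel=combined_kernel, C=1.0)
        clf.fit(X_train, y_train, sample_weight=w)
        X_keep, y_keep = X_train[clf.support_], y_train[clf.support_]
    return clf
```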
Funding: Supported by the Fundamental Research Funds for University of Science and Technology Beijing (FRF-BR-12-021)
Abstract: In order to handle the semi-supervised problem quickly and efficiently in the twin support vector machine (TWSVM) field, a semi-supervised twin support vector machine (S2TSVM) is proposed by adding the original unlabeled samples. In S2TSVM, the added unlabeled samples can easily cause the classification hyperplane to deviate from the sample points. A center-distance principle is therefore proposed to pre-classify the unlabeled samples, yielding a pre-classified S2TSVM (PS2TSVM). Compared with S2TSVM, PS2TSVM not only alleviates the deviation of the classification hyperplane but also improves the training speed. PS2TSVM is then smoothed, giving the pre-classified smooth S2TSVM (PS3TSVM), whose convergence is proved. Finally, nine datasets are selected from the UCI machine learning repository for comparison with other semi-supervised models. The experimental results show that the proposed PS3TSVM model achieves better classification results.
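A minimal numpy sketch of the center-distance pre-classification step: unlabeled samples receive the label of the nearer class centroid before the twin SVM is trained (the TWSVM training itself is not shown). The ±1 label convention and Euclidean centroid distance are assumptions for illustration.

```python
import numpy as np

def center_distance_prelabel(X_labeled, y_labeled, X_unlabeled):
    """Pre-classify unlabeled samples by their distance to the two class centroids."""
    c_pos = X_labeled[y_labeled == +1].mean(axis=0)
    c_neg = X_labeled[y_labeled == -1].mean(axis=0)
    d_pos = np.linalg.norm(X_unlabeled - c_pos, axis=1)
    d_neg = np.linalg.norm(X_unlabeled - c_neg, axis=1)
    # Assign each unlabeled sample to the class whose centroid is closer.
    return np.where(d_pos <= d_neg, +1, -1)
```

The pre-labeled samples can then be appended to the labeled set before solving the two TWSVM problems, which is what keeps the hyperplanes from drifting toward unconstrained unlabeled points.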
Funding: Sponsored by the National Natural Science Foundation of China (51006052)
Abstract: The extreme learning machine (ELM) has attracted much attention in recent years due to its fast convergence and good performance. Merging ELM with the support vector machine is an important trend, yielding the ELM kernel. Unlike commonly used kernels such as the Gaussian kernel, ELM kernel based methods solve nonlinear problems by inducing an explicit feature mapping. In this paper, the ELM kernel is extended to least squares support vector regression (LSSVR), giving ELM-LSSVR. ELM-LSSVR reduces the training and test time simultaneously without extra techniques such as sequential minimal optimization or pruning mechanisms, and it also relieves the memory requirements for training and testing. To confirm the efficacy and feasibility of the proposed ELM-LSSVR, experiments are reported demonstrating that ELM-LSSVR is advantageous in training and test time while achieving accuracy comparable to other algorithms.
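A rough numpy sketch of the explicit-mapping idea: a random ELM hidden layer provides the feature map, and a regularized least squares problem in LSSVR style is then solved directly in that primal feature space instead of a kernel dual. The hidden-layer size, sigmoid activation, and regularization value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_features(X, W, b):
    # Random-weight hidden layer: an explicit feature map h(x) = sigmoid(xW + b).
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def elm_lssvr_fit(X, y, n_hidden=100, gamma=10.0):
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = elm_features(X, W, b)
    # Regularized least squares in the explicit feature space; the system size is
    # n_hidden x n_hidden, independent of the number of training samples.
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / gamma, H.T @ y)
    return W, b, beta

def elm_lssvr_predict(X, W, b, beta):
    return elm_features(X, W, b) @ beta
```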
Abstract: A nonlinear multi-step-ahead optimizing predictive controller based on a support vector machine with a quadratic polynomial kernel function was presented. A support vector machine based predictive model was established by black-box identification, and a quadratic objective function with a receding horizon was selected to obtain the controller output. By solving a nonlinear optimization problem, with an equality constraint on the model output and boundary constraints on the controller output, using the Nelder-Mead simplex direct search method, a sub-optimal control law was obtained in the feature space. The controller was demonstrated on a recognized benchmark problem and on a continuous stirred-tank reactor. The simulation results show that the multi-step-ahead predictive controller can be applied well to nonlinear systems, with better performance in reference-trajectory tracking and disturbance rejection.
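A simplified sketch of the receding-horizon loop: an SVR model with a quadratic polynomial kernel is identified from logged input/output data and rolled forward over the horizon, and scipy's Nelder-Mead search minimizes a quadratic cost over the control sequence. The toy plant, horizon length, cost weights, and bound handling are illustrative assumptions rather than the paper's benchmark or CSTR setup.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.svm import SVR

# Black-box identification of a one-step-ahead model y(k+1) = f(y(k), u(k));
# the plant below is a synthetic stand-in used only to make the sketch runnable.
rng = np.random.default_rng(0)
u_log = rng.uniform(-1, 1, 200)
y_log = np.zeros(201)
for k in range(200):
    y_log[k + 1] = 0.8 * y_log[k] + 0.5 * np.tanh(u_log[k])
model = SVR(kernel='poly', degree=2).fit(np.column_stack([y_log[:-1], u_log]), y_log[1:])

def predict_horizon(y0, u_seq):
    """Roll the identified SVR model forward over a candidate control sequence."""
    y, traj = y0, []
    for u in u_seq:
        y = model.predict(np.array([[y, u]]))[0]
        traj.append(y)
    return np.array(traj)

def mpc_step(y0, ref, horizon=5, u_prev=0.0, lam=0.1, u_bounds=(-1.0, 1.0)):
    """One receding-horizon step: minimize a quadratic cost with Nelder-Mead."""
    def cost(u_seq):
        u_seq = np.clip(u_seq, *u_bounds)            # keep the controller output within bounds
        traj = predict_horizon(y0, u_seq)
        du = np.diff(np.concatenate([[u_prev], u_seq]))
        return np.sum((traj - ref) ** 2) + lam * np.sum(du ** 2)

    res = minimize(cost, x0=np.full(horizon, u_prev), method='Nelder-Mead')
    return float(np.clip(res.x[0], *u_bounds))       # apply only the first control move

print(mpc_step(y0=0.0, ref=0.5))
```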
Abstract: The support vector machine (SVM) was demonstrated as a potentially useful tool for integrating multiple variables and producing a predictive map for mineral deposits. e1071, a free R package, was used to construct an SVM with a radial kernel function to integrate four evidence layers and to map prospectivity for Gangdese porphyry copper deposits. The results demonstrate that the predicted prospective target area for Cu occupies 20.5% of the total study area and contains 52.4% of the known porphyry copper deposits.
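The paper builds its model with the R package e1071; the sketch below is a loosely analogous Python version using scikit-learn's RBF-kernel SVC, where the stacked evidence-layer array, the hypothetical train_idx indices, and the use of predict_proba for the prospectivity surface are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def map_prospectivity(evidence, train_idx, train_labels):
    """evidence: (n_rows, n_cols, n_layers) stack of evidence layers;
    train_idx/train_labels: training cells with 1 = deposit, 0 = non-deposit (assumed)."""
    n_rows, n_cols, n_layers = evidence.shape
    X_all = evidence.reshape(-1, n_layers)
    clf = SVC(kernel='rbf', probability=True)        # radial kernel, as in the paper
    clf.fit(X_all[train_idx], train_labels)
    # Probability of the deposit class for every grid cell -> prospectivity map
    # (column 1 assumes labels {0, 1} so that classes_[1] == 1).
    prob = clf.predict_proba(X_all)[:, 1]
    return prob.reshape(n_rows, n_cols)
```

Thresholding the returned probability surface (for example at a value chosen so that the flagged area matches a target percentage of the study area) gives the prospective target polygons.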
Funding: Supported by the National Natural Science Foundation of China (60736021) and the Joint Funds of NSFC-Guangdong Province (U0735003)
Abstract: Kernel-based methods work by embedding the data into a feature space and then searching for a linear hypothesis among the embedded data points. Their performance is mostly affected by which kernel is used, so a promising approach is to learn the kernel from the data automatically. A general regularized risk functional (RRF) criterion for kernel matrix learning is proposed. Compared with the standard RRF criterion, the general RRF criterion takes into account the geometric distributions of the embedded data points. It is proven that the distance between different geometric distributions can be estimated by their centroid distance in the reproducing kernel Hilbert space. Using this criterion for kernel matrix learning leads to a convex quadratically constrained quadratic programming (QCQP) problem, and the mathematical formulations are given for several commonly used loss functions. Experimental results on a collection of benchmark data sets demonstrate the effectiveness of the proposed method.
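A small sketch of the centroid-distance quantity mentioned above: given a kernel, the squared distance between two class centroids in the reproducing kernel Hilbert space can be computed from block means of the Gram matrix. The Gaussian kernel and its width are assumptions; the QCQP kernel-learning step itself is not shown.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def rkhs_centroid_distance(X_a, X_b, gamma=0.5):
    """Squared distance between class centroids in the RKHS of an RBF kernel:
    ||mu_a - mu_b||^2 = mean(K_aa) - 2*mean(K_ab) + mean(K_bb)."""
    K_aa = rbf_kernel(X_a, X_a, gamma=gamma)
    K_ab = rbf_kernel(X_a, X_b, gamma=gamma)
    K_bb = rbf_kernel(X_b, X_b, gamma=gamma)
    return K_aa.mean() - 2.0 * K_ab.mean() + K_bb.mean()
```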
Abstract: A hybrid feature selection and classification strategy was proposed based on a simulated annealing genetic algorithm and multiple instance learning (MIL). The band selection method is built on subspace decomposition and combines the simulated annealing algorithm with the genetic algorithm in choosing the crossover and mutation probabilities as well as the individuals to mutate. MIL was then combined with image segmentation, clustering, and support vector machine algorithms to classify the hyperspectral image. The experimental results show that the proposed method achieves a high classification accuracy of 93.13% with small training samples and overcomes the weaknesses of conventional methods.
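The paper couples simulated annealing with genetic operators; the sketch below keeps only the simulated-annealing part of that band-subset search, scoring each candidate subset by cross-validated SVM accuracy. The subset size, temperature schedule, and scoring function are assumptions chosen for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def subset_score(X, y, bands):
    # Fitness of a band subset = cross-validated SVM accuracy on those bands.
    return cross_val_score(SVC(kernel='rbf'), X[:, bands], y, cv=3).mean()

def sa_band_selection(X, y, n_bands=10, n_iter=200, t0=1.0, cooling=0.98):
    """Simulated-annealing search over band subsets (the paper's GA operators are omitted)."""
    n_total = X.shape[1]
    current = rng.choice(n_total, size=n_bands, replace=False)
    cur_fit = subset_score(X, y, current)
    best, best_fit, t = current.copy(), cur_fit, t0
    for _ in range(n_iter):
        cand = current.copy()
        # Mutation-like move: swap one selected band for a currently unselected one.
        cand[rng.integers(n_bands)] = rng.choice(np.setdiff1d(np.arange(n_total), current))
        cand_fit = subset_score(X, y, cand)
        # Metropolis rule: always accept improvements, occasionally accept worse subsets.
        if cand_fit > cur_fit or rng.random() < np.exp((cand_fit - cur_fit) / t):
            current, cur_fit = cand, cand_fit
            if cur_fit > best_fit:
                best, best_fit = current.copy(), cur_fit
        t *= cooling
    return np.sort(best)
```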
Funding: Supported by the National Natural Science Foundation of China (61772020, 62202433, 62172371, 62272422, 62036010), the Natural Science Foundation of Henan Province (22100002), and the Postdoctoral Research Grant in Henan Province (202103111)
Abstract: The least squares projection twin support vector machine (LSPTSVM) has faster computing speed than the classical least squares support vector machine (LSSVM). However, LSPTSVM is sensitive to outliers and its solution lacks sparsity, which makes it difficult for LSPTSVM to process large-scale datasets with outliers. In this paper, we propose a robust LSPTSVM model (called R-LSPTSVM) by applying a truncated least squares loss function. The robustness of R-LSPTSVM is proved from a weighted perspective. Furthermore, we obtain a sparse solution of R-LSPTSVM by using the pivoting Cholesky factorization method in the primal space, and the sparse R-LSPTSVM algorithm (SR-LSPTSVM) is proposed. Experimental results show that SR-LSPTSVM is insensitive to outliers and can deal with large-scale datasets quickly.
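A small numpy sketch of the truncated least squares loss and the sample weights it induces under the weighted view mentioned above; the truncation threshold theta and the hard zero/one weighting rule are illustrative assumptions, and the pivoted Cholesky sparsification step is not shown.

```python
import numpy as np

def truncated_ls_loss(residual, theta=1.0):
    """Truncated least squares loss: quadratic for small residuals, capped at theta**2."""
    return np.minimum(residual ** 2, theta ** 2)

def robust_weights(residual, theta=1.0):
    # Weighted view: samples whose residual exceeds the truncation threshold
    # (typically outliers) receive zero weight and stop influencing the fit.
    return np.where(np.abs(residual) <= theta, 1.0, 0.0)
```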
Funding: Project (70373017) supported by the National Natural Science Foundation of China
Abstract: A support vector machine time series forecasting model based on rough set data preprocessing was proposed by combining rough set attribute reduction with the support vector machine regression algorithm. First, the condition attributes that are redundant for forecasting are removed by the rough set method. Then, a new training sample set that retains only the attributes important to forecasting accuracy is formed from the minimum condition attribute set obtained after reduction and the corresponding initial data. The support vector machine is trained on the reduced training samples, and the testing sample set, reformed according to the minimum condition attribute set and the corresponding initial data, is then fed to the trained model. The model was tested, the mapping relation between the condition attributes and the forecasting variable was obtained, and power supply and demand were then forecasted with this model. The average absolute error rates for the power consumption of the whole society and the yearly maximum load are 14.21% and 13.23%, respectively, showing that the RS-SVM time series forecasting model has high forecasting accuracy.
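A compact sketch of the two-stage pipeline: a greedy rough-set style reduction drops condition attributes whose removal does not lower the dependency degree (the fraction of samples whose equivalence class determines the decision), and SVR is then trained on the retained attributes. The discretization granularity, the greedy elimination order, and all parameter values are assumptions for illustration, not the paper's exact reduction procedure.

```python
import numpy as np
from sklearn.svm import SVR

def dependency_degree(C, d):
    """Fraction of samples whose equivalence class under condition attributes C
    maps to a single decision value (the rough-set positive region)."""
    groups = {}
    for key, dec in zip(map(tuple, C), d):
        groups.setdefault(key, set()).add(dec)
    consistent = sum(len(groups[tuple(row)]) == 1 for row in C)
    return consistent / len(d)

def rough_set_reduce(C, d):
    """Greedily drop attributes whose removal does not lower the dependency degree."""
    keep = list(range(C.shape[1]))
    base = dependency_degree(C, d)
    for j in range(C.shape[1]):
        trial = [k for k in keep if k != j]
        if trial and dependency_degree(C[:, trial], d) >= base:
            keep = trial
    return keep

def fit_rs_svr(X, y, n_bins=4):
    # Rough set reduction needs categorical data, so the continuous condition
    # attributes and the target are binned first (bin count is an assumption).
    X_disc = np.stack([np.digitize(col, np.histogram_bin_edges(col, n_bins)[1:-1])
                       for col in X.T], axis=1)
    y_disc = np.digitize(y, np.histogram_bin_edges(y, n_bins)[1:-1])
    keep = rough_set_reduce(X_disc, y_disc)
    # Train SVR on the original (continuous) values of the retained attributes.
    model = SVR(kernel='rbf').fit(X[:, keep], y)
    return model, keep
```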