Funding: This project was supported by the National Basic Research Program of China (2001CB309403).
Abstract: To improve the performance of a multiple classifier system, a knowledge discovery based dynamic weighted voting (KD-DWV) method is proposed. In this method, the base classifiers are allowed to operate in different measurement/feature spaces so as to make the most of diverse classification information. The weight assigned to each output of a base classifier is estimated from the separability of the training sample sets in the relevant feature space. For this purpose, decision tables (DTs) are established in terms of the diverse feature sets, and uncertainty measures of the separability are then induced from each DT, in the form of mass functions in Dempster-Shafer theory (DST), based on a generalized rough set model. From these mass functions, the weights are calculated by a modified heuristic fusion function and assigned dynamically to each classifier according to its output. A comparison experiment is performed on hyperspectral remote sensing images, and the results show that the proposed method improves classification performance compared with plurality voting (PV).
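To make the dynamic weighting idea concrete, the minimal Python sketch below derives a per-class separability weight for each base classifier from a simple rough-set lower-approximation ratio on its discretized training data, and then lets each classifier's vote count with the weight of the class it actually predicts, so the weight varies with the output. The function names, the discretization assumption, and the plain ratio are illustrative simplifications; they stand in for, but do not reproduce, the paper's generalized rough set model, DST mass functions, and modified heuristic fusion function.

```python
# Illustrative sketch only, not the authors' KD-DWV implementation.
from collections import defaultdict


def class_separability(X_disc, y):
    """Rough-set style separability: for each class, the fraction of its
    training samples that fall in feature-value groups containing only
    that class (a lower-approximation ratio). X_disc holds discretized
    feature rows for one classifier's feature space."""
    groups = defaultdict(list)
    for row, label in zip(map(tuple, X_disc), y):
        groups[row].append(label)
    counts = defaultdict(int)
    for label in y:
        counts[label] += 1
    lower = defaultdict(int)
    for labels in groups.values():
        if len(set(labels)) == 1:        # consistent group: one class only
            lower[labels[0]] += len(labels)
    return {c: lower[c] / counts[c] for c in counts}


def dynamic_weighted_vote(classifiers, weights, x_views):
    """Each classifier sees its own view of the sample and votes for a
    class; the vote is weighted by the separability of the predicted
    class, so the weight depends on the classifier's output."""
    scores = defaultdict(float)
    for clf, w, x in zip(classifiers, weights, x_views):
        pred = clf.predict(x.reshape(1, -1))[0]
        scores[pred] += w.get(pred, 0.0)
    return max(scores, key=scores.get)

# Typical use (hypothetical): weights[i] = class_separability(
#     discretize(X_train_view_i), y_train) for the i-th base classifier.
```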
Funding: This work was supported by the Natural Science Foundation of Shaanxi Province (2005F51).
Abstract: Most ensemble learning algorithms use a centralized model that requires all training instances to be gathered on a single station, which is often difficult in practice. A distributed ensemble learning algorithm is therefore proposed in which each instance carries two kinds of weights, one describing the global distribution and one describing the local distribution. Instead of the repeated sampling used in standard ensemble learning, non-balanced sampling at each station is used to train that station's set of base classifiers. The concept of the effective nearby region of a local integration classifier is introduced and used for the dynamic integration of multiple classifiers in a distributed environment. Experiments show that the proposed distributed ensemble learning algorithm effectively reduces the time needed to train the base classifiers while achieving classification performance comparable to that of centralized learning.
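The Python sketch below illustrates the distributed setting described above under simplifying assumptions: each station keeps its data locally and trains its own base classifiers on instance-weighted (non-balanced) samples, and a query is answered only by stations whose training data lies near it, a crude stand-in for the effective nearby region. The Station class, the Dirichlet-drawn instance weights, and the median-radius test are hypothetical choices for illustration and do not reproduce the paper's global/local weight genes or its exact dynamic integration rule.

```python
# Illustrative sketch only, not the authors' distributed ensemble algorithm.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


class Station:
    """One data station: trains its own base classifiers on weighted,
    non-balanced samples of its local data (X, y are NumPy arrays)."""

    def __init__(self, X, y, n_classifiers=5, sample_frac=0.6, rng=None):
        self.rng = rng or np.random.default_rng()
        self.X, self.y = X, y
        self.classifiers = []
        for _ in range(n_classifiers):
            # Non-balanced sampling: per-instance weights drawn at random
            # instead of a uniform bootstrap (the paper derives these
            # weights from global and local distributions).
            p = self.rng.dirichlet(np.ones(len(X)))
            idx = self.rng.choice(len(X), size=int(sample_frac * len(X)),
                                  replace=True, p=p)
            self.classifiers.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
        # Crude "nearby region": median distance of local data to its centroid.
        self.radius = np.median(np.linalg.norm(X - X.mean(axis=0), axis=1))

    def local_votes(self, x):
        # Vote only if the query falls inside this station's nearby region.
        d = np.linalg.norm(self.X - x, axis=1)
        if d.min() > self.radius:
            return []
        return [clf.predict(x.reshape(1, -1))[0] for clf in self.classifiers]


def distributed_predict(stations, x):
    """Dynamic integration: collect votes from stations that consider the
    query nearby; fall back to all stations if none claims it."""
    votes = [v for s in stations for v in s.local_votes(x)]
    if not votes:
        votes = [clf.predict(x.reshape(1, -1))[0]
                 for s in stations for clf in s.classifiers]
    values, counts = np.unique(votes, return_counts=True)
    return values[counts.argmax()]
```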