1. Dynamic weighted voting for multiple classifier fusion: a generalized rough set method (cited by 9)
Authors: Sun Liang, Han Chongzhao. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2006, Issue 3, pp. 487-494 (8 pages)
To improve the performance of a multiple classifier system, a knowledge discovery based dynamic weighted voting (KD-DWV) method is proposed. In the method, the base classifiers may operate in different measurement/feature spaces to make the most of diverse classification information. The weight assigned to each output of a base classifier is estimated from the separability of the training sample sets in the relevant feature space. For this purpose, decision tables (DTs) are established in terms of the diverse feature sets. Uncertainty measures of the separability are then induced from each DT, in the form of mass functions in Dempster-Shafer theory (DST), based on a generalized rough set model. From the mass functions, the weights are calculated by a modified heuristic fusion function and assigned dynamically to each classifier, varying with its output. A comparison experiment is performed on hyperspectral remote sensing images, and the experimental results show that the proposed method improves classification performance compared with plurality voting (PV).
Keywords: multiple classifier fusion; dynamic weighted voting; generalized rough set; hyperspectral
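The core idea of the abstract above, weighting each base classifier's vote by the class separability of its own feature space, can be sketched in a few lines. This is a simplified illustration, not the paper's KD-DWV method: `separability_weight` uses a crude between-class/within-class ratio for 1-D, two-class data instead of the rough-set decision tables and Dempster-Shafer mass functions, and both function names are my own.

```python
import math

def separability_weight(xs, ys):
    """Crude separability score for a 1-D, two-class sample:
    distance between class means over mean within-class spread.
    A stand-in for the paper's rough-set-based uncertainty measures."""
    by_class = {}
    for x, y in zip(xs, ys):
        by_class.setdefault(y, []).append(x)
    a, b = (by_class[c] for c in sorted(by_class))
    mean = lambda v: sum(v) / len(v)
    std = lambda v: math.sqrt(sum((x - mean(v)) ** 2 for x in v) / len(v))
    between = abs(mean(a) - mean(b))
    within = (std(a) + std(b)) / 2 + 1e-12
    return between / within

def dynamic_weighted_vote(votes, weights):
    """Fuse hard class votes, each scaled by its classifier's weight;
    the class with the largest weighted score wins."""
    scores = {}
    for v, w in zip(votes, weights):
        scores[v] = scores.get(v, 0.0) + w
    return max(scores, key=scores.get)
```

With weights of 0.9 and 0.1, a single well-separated classifier can outvote the rest, which is exactly what distinguishes weighted voting from plain plurality voting.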
2. Novel ensemble learning based on multiple section distribution in distributed environment
Authors: Fang Min. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2008, Issue 2, pp. 377-380 (4 pages)
Most ensemble learning algorithms use a centralized model in which all training instances must be gathered on a single station, which is often impractical for distributed data. A distributed ensemble learning algorithm is proposed in which each instance carries two kinds of weights, denoting the global distribution and the local distribution. Instead of the repeated sampling used in standard ensemble learning, non-balanced sampling from each station is used to train the base classifier set of each station. The concept of the effective nearby region for the local integration classifier is proposed and used for the dynamic integration of multiple classifiers in a distributed environment. Experiments show that the proposed algorithm effectively reduces the time needed to train the base classifiers while achieving classification performance comparable to the centralized learning method.
Keywords: distributed environment; ensemble learning; multiple classifiers combination
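The scheme described above, stations that train base classifiers locally and fuse predictions with locally computed weights, can be sketched as follows. This is a hedged illustration under my own assumptions: each station uses a simple 1-D nearest-centroid classifier, and a station's weight for a query is its accuracy on its own training points within a fixed radius, a crude stand-in for the paper's "effective nearby region". All class and function names are hypothetical.

```python
class Station:
    """One data station: holds its own instances and a nearest-centroid
    base classifier trained only on local data (no data leaves the station)."""
    def __init__(self, xs, ys):
        self.xs, self.ys = list(xs), list(ys)
        self.centroids = {}
        for c in set(ys):
            pts = [x for x, y in zip(xs, ys) if y == c]
            self.centroids[c] = sum(pts) / len(pts)

    def predict(self, x):
        # Assign the class whose centroid is closest to x.
        return min(self.centroids, key=lambda c: abs(x - self.centroids[c]))

    def local_weight(self, x, radius):
        """Accuracy on local training points within `radius` of the query:
        a rough proxy for competence in the query's neighborhood."""
        near = [(p, y) for p, y in zip(self.xs, self.ys) if abs(p - x) <= radius]
        if not near:
            return 0.0
        hits = sum(1 for p, y in near if self.predict(p) == y)
        return hits / len(near)

def distributed_predict(stations, x, radius=2.0):
    """Dynamic integration: each station votes with a weight computed
    from its own data, so only predictions and weights are exchanged."""
    scores = {}
    for s in stations:
        label = s.predict(x)
        scores[label] = scores.get(label, 0.0) + s.local_weight(x, radius)
    return max(scores, key=scores.get)
```

Because each station computes its weight from data it already holds, the fusion step needs only one prediction and one scalar per station, which is what makes the approach attractive when training data cannot be centralized.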