Funding: Project (61101185) supported by the National Natural Science Foundation of China; Project (2011AA1221) supported by the National High Technology Research and Development Program of China.
Abstract: To improve the performance of the particle filter (PF) based probability hypothesis density (PHD) algorithm in estimating the number and extracting the states of multiple targets, a new PHD filter algorithm based on marginalized particles and kernel density estimation is proposed, which uses the idea of the marginalized particle filter to enhance the estimation performance of the PHD. The state variables are decomposed into linear and nonlinear parts. After this dimensionality reduction, the particle filter predicts and estimates the nonlinear states of the multiple targets, while the Kalman filter estimates the linear parts under the linear Gaussian condition. Embedding the information of the linear states into the estimated nonlinear states reduces the estimation variance and improves the accuracy of the target-number estimate. Mean-shift kernel density estimation, which inherently searches for peaks via adaptive gradient-ascent iterations, is introduced to cluster particles and extract target states; it is independent of the target number and converges to the local peaks of the PHD distribution while avoiding errors caused by inaccurate modeling and parameter estimation. Experiments show that the proposed algorithm achieves higher tracking accuracy with fewer sampling particles and has lower computational complexity than the PF-PHD.
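The mean-shift clustering step described above can be sketched as follows. This is a minimal illustration in Python, not the paper's implementation: the Gaussian kernel, the bandwidth value, and the function and variable names (mean_shift_modes, bandwidth, etc.) are assumptions made for the example, and the toy data stand in for the weighted particles that approximate the PHD.

```python
import numpy as np

def mean_shift_modes(particles, weights, bandwidth=1.0, n_iter=50, tol=1e-4):
    """Shift each particle toward a local peak of the weighted kernel density.

    particles: (N, d) array of nonlinear particle states
    weights:   (N,) array of PHD particle weights
    Returns the converged positions; positions that coincide indicate one target.
    """
    modes = particles.copy()
    for _ in range(n_iter):
        shifted = np.empty_like(modes)
        for i, x in enumerate(modes):
            # Gaussian kernel between the current position and every particle
            d2 = np.sum((particles - x) ** 2, axis=1)
            k = weights * np.exp(-0.5 * d2 / bandwidth ** 2)
            # weighted mean of the particles = one gradient-ascent step
            shifted[i] = k @ particles / k.sum()
        converged = np.max(np.linalg.norm(shifted - modes, axis=1)) < tol
        modes = shifted
        if converged:
            break
    return modes

# toy usage: particles drawn around two targets at x = 0 and x = 5
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.3, (100, 1)), rng.normal(5.0, 0.3, (100, 1))])
w = np.full(200, 1.0 / 200)
print(np.unique(np.round(mean_shift_modes(pts, w, bandwidth=0.5))))  # two modes near 0 and 5
```

Particles whose shifted positions coincide within a tolerance are grouped as one target, so the number of distinct modes, rather than a preset cluster count, determines the extracted states.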
Funding: Projects (61603393, 61741318) supported in part by the National Natural Science Foundation of China; Project (BK20160275) supported by the Natural Science Foundation of Jiangsu Province, China; Project (2015M581885) supported by the Postdoctoral Science Foundation of China; Project (PAL-N201706) supported by the Open Project Foundation of the State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, China.
Abstract: As a production quality index of the hematite grinding process, particle size (PS) is difficult to measure in real time. To estimate PS, this paper proposes a novel data-driven PS model using a stochastic configuration network (SCN) with a robust technique, namely the robust SCN (RSCN). First, the universal approximation property of the RSCN with the weighted least squares technique is proved. Second, three robust algorithms are presented, which set the penalty weights by M-estimation with the Huber loss function, M-estimation with the interquartile range (IQR), and the nonparametric kernel density estimation (NKDE) function, respectively. Comparison experiments are first carried out on UCI standard data sets to verify the effectiveness of these methods, and then the data-driven PS models based on the robust algorithms are established and verified. Experimental results show that the RSCN achieves excellent performance for PS estimation.
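As a rough illustration of the first robust variant, the sketch below solves a weighted least-squares problem whose penalty weights come from the Huber weight function, using iteratively reweighted least squares. It is a generic M-estimation example under assumed settings (the tuning constant delta = 1.345, the MAD scale estimate, and the name huber_weighted_lstsq are all illustrative) and not the RSCN formulation of the paper.

```python
import numpy as np

def huber_weighted_lstsq(H, y, delta=1.345, n_iter=20):
    """Iteratively reweighted least squares with Huber weights.

    H: (N, L) regressor matrix (e.g., hidden-node outputs), y: (N,) targets.
    Samples with large residuals are down-weighted, which is the role the
    penalty weights play in a robust weighted least-squares solution.
    """
    beta = np.linalg.lstsq(H, y, rcond=None)[0]                    # ordinary LS start
    for _ in range(n_iter):
        r = y - H @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= delta, 1.0, delta / u)                   # Huber weight function
        beta = np.linalg.lstsq(np.sqrt(w)[:, None] * H, np.sqrt(w) * y, rcond=None)[0]
    return beta, w

# toy usage: linear data with a few gross outliers
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.05 * rng.normal(size=200)
y[:5] += 10.0                                                      # outliers
beta, w = huber_weighted_lstsq(X, y)
print(np.round(beta, 2), w[:5].round(2))   # coefficients recovered; outliers down-weighted
```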
Abstract: Because the sintering process involves many uncertain factors, the reliability of mechanism analysis and point-prediction results is limited. A random forest-extreme tree-kernel density estimation (RF-ET-KDE) algorithm is therefore proposed for interval prediction of the physical indices (particle size and moisture). First, data preprocessing and feature selection are used to screen out the feature variables most suitable for modeling. Second, a Stacking-based RF-ET algorithm performs point prediction of the indices, giving the model high accuracy and good generalization. Then, the KDE algorithm estimates the prediction errors of the indices, yielding the error distribution and interval-prediction results at a given confidence level. Finally, the proposed model is compared with other combined models. The results show that the RF-ET algorithm achieves good point-prediction performance, and the KDE algorithm quantifies the index errors well, producing highly reliable interval-prediction results.
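The KDE-based interval construction can be illustrated with a short sketch: fit a kernel density to the point-prediction residuals from a validation set and read off residual quantiles at the desired confidence level. The helper name kde_prediction_interval, the grid resolution, and the use of scipy.stats.gaussian_kde are assumptions made for this example; the paper's bandwidth choice and error model are not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_prediction_interval(residuals, point_pred, confidence=0.95):
    """Turn point predictions into intervals using a KDE of past residuals.

    residuals:  1-D array of (actual - predicted) errors from a validation set
    point_pred: array of new point predictions (e.g., from a stacked point model)
    Returns lower and upper bounds at the given confidence level.
    """
    kde = gaussian_kde(residuals)
    # dense grid over the residual range to invert the KDE's CDF numerically
    grid = np.linspace(residuals.min() - 3 * residuals.std(),
                       residuals.max() + 3 * residuals.std(), 2000)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]
    alpha = 1.0 - confidence
    lo = np.interp(alpha / 2, cdf, grid)       # lower residual quantile
    hi = np.interp(1 - alpha / 2, cdf, grid)   # upper residual quantile
    return point_pred + lo, point_pred + hi

# toy usage with synthetic residuals and two new point predictions
rng = np.random.default_rng(2)
res = rng.normal(0.0, 0.8, 500)
lower, upper = kde_prediction_interval(res, np.array([10.0, 12.5]), 0.95)
print(lower.round(2), upper.round(2))
```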
Abstract: The cross-entropy method can significantly accelerate power grid reliability evaluation, but it usually focuses on independent random variables; extending it to correlated variables can further improve the acceleration. To obtain an effective importance-sampling density for correlated variables, cross-entropy optimization is studied for the kernel density estimation (KDE) model, which is widely used in correlation modeling. Because the KDE model does not belong to the exponential family, conventional cross-entropy optimization is difficult to apply; a novel direct cross-entropy optimization method is therefore proposed by exploiting the structure of the composition sampling algorithm, and an analytical expression for the optimal weight parameters of the KDE model is derived. Since the weight parameters are of small magnitude, direct optimization tends to degrade accuracy; an indirect cross-entropy optimization method based on the idea of subset simulation is further proposed, which converts the optimization of the small weight parameters into the optimization of larger conditional probabilities and thereby improves the optimization accuracy. Evaluations on the MRTS79 and MRTS96 reliability test systems verify the efficient acceleration of the proposed methods in the reliability evaluation of power grids with correlated variables.
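A generic version of the mixture-weight update behind such a direct cross-entropy step is sketched below: the KDE is sampled by composition (pick a kernel according to its weight, then draw from that kernel), and the component weights are re-estimated from the samples that trigger the rare event, weighted by their likelihood ratios. This is an illustrative one-dimensional example under assumed names (ce_update_kde_weights, score_fn, threshold) and does not reproduce the paper's analytical expression or its indirect, subset-simulation-based variant.

```python
import numpy as np

def ce_update_kde_weights(centers, h, weights, score_fn, threshold,
                          n_samples=5000, rng=None):
    """One cross-entropy update of the component weights of a 1-D Gaussian KDE
    used as an importance-sampling density via composition sampling."""
    rng = rng or np.random.default_rng()
    n = len(centers)
    # composition sampling: pick a kernel by its weight, then sample from it
    idx = rng.choice(n, size=n_samples, p=weights)
    x = rng.normal(centers[idx], h)
    # phi[k, i] = density of kernel i evaluated at sample k
    phi = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    f = phi.mean(axis=1)                  # nominal KDE density (equal weights 1/n)
    g = phi @ weights                     # current proposal density
    lr = f / g                            # likelihood ratio (nominal / proposal)
    hit = (score_fn(x) > threshold).astype(float)
    resp = (phi * weights) / g[:, None]   # kernel responsibilities per sample
    new_w = (hit * lr) @ resp             # unnormalized cross-entropy weight update
    new_w = new_w / new_w.sum() if new_w.sum() > 0 else weights
    p_hat = np.mean(hit * lr)             # rare-event probability estimate for this batch
    return new_w, p_hat

# toy usage: data-driven KDE of a load-like variable, rare event "x > 3"
rng = np.random.default_rng(3)
data = rng.normal(0.0, 1.0, 200)          # data behind the nominal KDE
w = np.full(len(data), 1.0 / len(data))
for _ in range(3):                        # a few cross-entropy iterations
    w, p_hat = ce_update_kde_weights(data, 0.3, w, lambda x: x, 3.0, rng=rng)
print(f"estimated P(x > 3) under the KDE ≈ {p_hat:.2e}")
```

After a few iterations the weight mass concentrates on the kernels whose samples reach the rare-event region, so far fewer samples are needed for a stable estimate than with crude Monte Carlo from the original equal-weight KDE.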