Abstract: A Newton learning method for a neural network of multilayer perceptrons is proposed in this paper. Furthermore, a hybrid learning method is developed by combining the backpropagation method proposed by Rumelhart et al. with the Newton learning method. Finally, the hybrid learning algorithm is compared with the backpropagation algorithm on several illustrative examples, and the results show that the hybrid learning algorithm converges rapidly.
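To make the Newton step concrete, the sketch below contrasts a single Newton update with plain gradient updates on a single linear neuron with a mean-squared-error cost, where the gradient and Hessian have closed forms. It only illustrates the update rule, not the paper's multilayer formulation or its hybrid schedule; the data, step size, and iteration counts are arbitrary.

```python
import numpy as np

# Illustrative sketch (not the paper's multilayer formulation): for a single
# linear neuron with cost E(w) = ||Xw - y||^2 / (2N), the gradient and Hessian
# have closed forms, so one Newton step can be compared with plain
# gradient (backpropagation-style) steps.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # inputs
y = X @ np.array([1.5, -2.0, 0.5])     # targets from a known linear map
N = len(y)

def grad(w):
    return X.T @ (X @ w - y) / N       # dE/dw

def hessian(w):
    return X.T @ X / N                 # constant, since the cost is quadratic

w_gd = np.zeros(3)
for _ in range(50):                    # gradient (backprop-style) updates
    w_gd -= 0.1 * grad(w_gd)

w_newton = np.zeros(3)
w_newton -= np.linalg.solve(hessian(w_newton), grad(w_newton))  # one Newton step

print("gradient descent residual gradient norm:", np.linalg.norm(grad(w_gd)))
print("Newton residual gradient norm:          ", np.linalg.norm(grad(w_newton)))
```

In a hybrid scheme of the kind described above, one would typically take inexpensive gradient (backpropagation) steps far from a minimum and switch to Newton steps once the local quadratic approximation becomes reliable.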
Abstract: Newton's learning algorithm for neural networks (NN) is presented and realized. In theory, a learning algorithm based on Newton's method must converge faster than BP and other gradient-based learning algorithms, because the gradient method converges linearly while Newton's method has a second-order convergence rate. A fast algorithm for computing the Hessian matrix of the cost function of the NN is proposed, which forms the theoretical basis of the improved Newton learning algorithm. Simulation results show that the convergence rate of Newton's learning algorithm is high and clearly faster than that of the traditional BP method, and its robustness is also better than the BP method's.
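The convergence claim can be illustrated on a cost whose Hessian is cheap to write down. The sketch below runs damped Newton updates for a logistic cost, whose gradient and Hessian have the standard closed forms; it is not the paper's fast Hessian algorithm for a full network, only a small example of second-order updates driving the gradient norm down in a handful of iterations.

```python
import numpy as np

# Hedged sketch: generic Newton updates for a logistic cost, whose Hessian has
# the closed form X^T diag(p(1-p)) X. This is not the paper's fast Hessian
# algorithm; it only shows how explicit second-order information accelerates
# convergence near the optimum.

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
p_true = 1.0 / (1.0 + np.exp(-(X @ np.array([2.0, -1.0]))))
y = rng.binomial(1, p_true).astype(float)            # noisy synthetic labels

w = np.zeros(2)
for it in range(6):
    p = 1.0 / (1.0 + np.exp(-X @ w))                 # predicted probabilities
    g = X.T @ (p - y)                                 # gradient of the log-loss
    H = X.T @ (X * (p * (1 - p))[:, None])            # Hessian of the log-loss
    w -= np.linalg.solve(H + 1e-6 * np.eye(2), g)     # damped Newton step
    print(it, np.linalg.norm(g))                      # gradient norm shrinks fast
```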
Funding: Supported by the National Natural Science Foundation of China (60736021) and the Joint Funds of NSFC-Guangdong Province (U0735003).
Abstract: Kernel-based methods work by embedding the data into a feature space and then searching for a linear hypothesis among the embedded data points. The performance is mostly determined by which kernel is used, so a promising approach is to learn the kernel from the data automatically. A general regularized risk functional (RRF) criterion for kernel matrix learning is proposed. Compared with the RRF criterion, the general RRF criterion takes into account the geometric distributions of the embedded data points. It is proven that the distance between different geometric distributions can be estimated by their centroid distance in the reproducing kernel Hilbert space. Using this criterion for kernel matrix learning leads to a convex quadratically constrained quadratic programming (QCQP) problem. For several commonly used loss functions, the corresponding mathematical formulations are given. Experimental results on a collection of benchmark data sets demonstrate the effectiveness of the proposed method.
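The centroid-distance estimate mentioned in the abstract can be computed from a Gram matrix alone. The sketch below evaluates the squared RKHS distance between two class centroids, mean(K++) - 2*mean(K+-) + mean(K--), and uses it as a crude criterion for choosing an RBF kernel width; the kernel family, data, and selection loop are illustrative assumptions rather than the paper's QCQP formulation.

```python
import numpy as np

# Hedged sketch: for a Gram matrix K and binary labels, the squared distance
# between the two class centroids in the RKHS is
#   ||mu_+ - mu_-||^2 = mean(K[+,+]) - 2*mean(K[+,-]) + mean(K[-,-]),
# which needs only kernel evaluations. RBF kernel and data are illustrative.

def rbf_gram(X, gamma):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def rkhs_centroid_distance(K, y):
    pos, neg = (y == 1), (y == 0)
    return (K[np.ix_(pos, pos)].mean()
            - 2 * K[np.ix_(pos, neg)].mean()
            + K[np.ix_(neg, neg)].mean())

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.repeat([0, 1], 50)

# A simple use of the criterion: pick the RBF width that best separates
# the two class centroids in feature space.
for gamma in [0.01, 0.1, 1.0]:
    K = rbf_gram(X, gamma)
    print(gamma, rkhs_centroid_distance(K, y))
```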
Funding: Supported by the National Natural Science Foundation of China (61773405, 61751312) and the Major Scientific and Technological Innovation Projects of Shandong Province (2019JZZY020123).
Abstract: Extreme learning machine (ELM) has been proven by researchers to be an effective pattern classification and regression learning mechanism. However, its good performance relies on a large number of hidden layer nodes, and as the number of hidden nodes grows, the computational cost increases greatly. In this paper, we propose a novel algorithm, named constrained voting extreme learning machine (CV-ELM). Compared with the traditional ELM, the CV-ELM determines the input weights and biases based on the differences between samples of different classes. In addition, voting selection is introduced to improve the accuracy of the proposed method. The proposed method is evaluated on public benchmark datasets, and the experimental results show that the proposed algorithm is superior to the original ELM algorithm. Further, we apply the CV-ELM to the classification of the superheat degree (SD) state in the aluminum electrolysis industry; the recognition accuracy rate reaches 87.4%, and the experimental results demonstrate that the proposed method is more robust than existing state-of-the-art identification methods.
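A rough sense of the two ingredients named above, class-difference-based hidden weights and voting, can be given in a few lines. The sketch below is not the authors' CV-ELM: it draws hidden weights from normalized differences between samples of different classes, solves the output weights by regularized least squares as in a standard ELM, and lets a small ensemble vote; the activation, regularization, ensemble size, and data are illustrative.

```python
import numpy as np

# Hedged sketch of the ideas named in the abstract, not the authors' exact
# CV-ELM: hidden weights from between-class sample differences, ELM-style
# least-squares output weights, and a simple majority vote over an ensemble.

def constrained_hidden_layer(X, y, n_hidden, rng):
    """Weights from normalized between-class sample differences."""
    pos, neg = X[y == 1], X[y == 0]
    W, b = [], []
    for _ in range(n_hidden):
        d = pos[rng.integers(len(pos))] - neg[rng.integers(len(neg))]
        W.append(d / (np.linalg.norm(d) + 1e-12))
        b.append(rng.uniform(-1, 1))
    return np.array(W), np.array(b)

def train_elm(X, y, n_hidden, rng, reg=1e-3):
    W, b = constrained_hidden_layer(X, y, n_hidden, rng)
    H = np.tanh(X @ W.T + b)                       # hidden activations
    T = np.where(y == 1, 1.0, -1.0)                # +/-1 targets
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def predict(X, model):
    W, b, beta = model
    return (np.tanh(X @ W.T + b) @ beta > 0).astype(int)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(1.5, 1, (100, 4))])
y = np.repeat([0, 1], 100)

models = [train_elm(X, y, 30, rng) for _ in range(7)]    # ensemble of 7 ELMs
votes = np.mean([predict(X, m) for m in models], axis=0)
y_hat = (votes > 0.5).astype(int)                        # majority vote
print("training accuracy:", (y_hat == y).mean())
```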
Abstract: Sparse representation has attracted extensive attention and performed well on image super-resolution (SR) in the last decade. However, many current image SR methods face a trade-off between detail recovery and artifact suppression. We propose a multi-resolution dictionary learning (MRDL) model to resolve this trade-off, and give a fast single-image SR method based on the MRDL model. To obtain the MRDL model, we first extract multi-scale patches using our proposed adaptive patch partition method (APPM). The APPM divides images into patches of different sizes according to their detail richness. Then, multi-resolution dictionary pairs, which contain structural primitives of various resolutions, are trained from these multi-scale patches. Owing to the MRDL strategy, our SR algorithm not only recovers details well, with fewer jagged artifacts and less noise, but also significantly improves the computational efficiency. Experimental results validate that our algorithm outperforms other SR methods in both evaluation metrics and visual perception.
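One simple way to partition an image "according to detail richness", in the spirit of the APPM step described above, is a recursive quadrant split driven by patch variance. The sketch below is an assumption about how such a partition could look, not the paper's APPM; the variance threshold and size limits are illustrative.

```python
import numpy as np

# Hedged sketch of an adaptive patch partition: a patch is split into four
# quadrants whenever its pixel variance (a crude stand-in for "detail
# richness") exceeds a threshold and it is still larger than a minimum size.
# This is not the paper's APPM; thresholds and sizes are illustrative.

def adaptive_partition(img, top, left, size, min_size=8, var_thresh=50.0):
    patch = img[top:top + size, left:left + size]
    if size <= min_size or patch.var() <= var_thresh:
        return [(top, left, size)]                      # keep as one patch
    half = size // 2
    patches = []
    for dt in (0, half):
        for dl in (0, half):
            patches += adaptive_partition(img, top + dt, left + dl, half,
                                          min_size, var_thresh)
    return patches

rng = np.random.default_rng(4)
img = np.zeros((64, 64))
img[16:48, 16:48] = rng.normal(128, 40, (32, 32))       # a "detailed" region

patches = adaptive_partition(img, 0, 0, 64)
sizes = [s for _, _, s in patches]
print(len(patches), "patches; sizes range from", min(sizes), "to", max(sizes))
```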
Funding: Supported by the National Natural Science Foundation of P. R. China (60474049).
Abstract: An iterative learning control algorithm based on shifted Legendre orthogonal polynomials is proposed to address the terminal control problem of linear time-varying systems. First, the method parameterizes the linear time-varying system using a shifted Legendre polynomial approximation. Then, an approximate model of the system is deduced by employing the orthogonality relations and boundary values of the shifted Legendre polynomials. Based on this model, the shifted Legendre coefficients of the control function are iteratively adjusted by a derived optimal iterative learning law. The presented algorithm avoids solving the state transition matrix of linear time-varying systems. Simulation results illustrate the effectiveness of the proposed method.
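The parameterization step rests on expanding signals in shifted Legendre polynomials on [0, 1], whose orthogonality gives the coefficients directly. The sketch below builds that basis and recovers the coefficients of an example control signal by quadrature; the learning law and system model from the abstract are not reproduced, and the signal and truncation order are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Hedged sketch: expanding a signal in shifted Legendre polynomials on [0, 1],
#   u(t) ~= sum_k c_k * P~_k(t),   c_k = (2k + 1) * int_0^1 u(t) P~_k(t) dt,
# where P~_k(t) = P_k(2t - 1). Only the expansion step is shown; the iterative
# learning law itself is not reproduced here.

def shifted_legendre(k, t):
    """P~_k(t) = P_k(2t - 1) on [0, 1]."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return L.legval(2 * t - 1, c)

def trapezoid(y, t):
    """Simple trapezoidal rule, kept local for NumPy-version independence."""
    return np.sum((y[:-1] + y[1:]) * np.diff(t)) / 2.0

def expand(u, n_terms, n_quad=400):
    """Shifted-Legendre coefficients of u(t) via orthogonality and quadrature."""
    t = np.linspace(0, 1, n_quad)
    return np.array([(2 * k + 1) * trapezoid(u(t) * shifted_legendre(k, t), t)
                     for k in range(n_terms)])

u = lambda t: np.sin(2 * np.pi * t) + 0.5 * t          # an example control signal
c = expand(u, n_terms=10)

t = np.linspace(0, 1, 200)
u_hat = sum(c[k] * shifted_legendre(k, t) for k in range(len(c)))
print("max reconstruction error:", np.abs(u_hat - u(t)).max())
```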
Abstract: Intelligent visual inspection of transmission lines is essential to the safe and stable operation of power systems. Although deep learning networks perform well when the training and test data share the same distribution, distribution shifts in practical applications often degrade model performance. To address this problem, a training method based on contrastive learning (TMCL) is proposed to enhance model robustness. First, a benchmark test set designed specifically for transmission line scenarios, TLD-C (Transmission Line Dataset-Corruption), is constructed to evaluate model robustness against image corruption. Second, positive and negative sample pairs sensitive to class features are constructed to improve the model's ability to distinguish features of different classes. Then, a joint optimization strategy combining contrastive loss and cross-entropy loss imposes additional constraints on the feature extraction process to refine the feature representations. Finally, a non-local feature denoising network (NFD) is introduced to extract features closely related to the classes. Experimental results show that the improved training method achieves an average precision on the transmission line dataset (TLD) 3.40 percentage points higher than the original method, and a relative corruption precision (rCP) on the TLD-C dataset 4.69 percentage points higher than the original method.
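The joint optimization strategy combines a contrastive term with cross-entropy. The sketch below evaluates one plausible form of such a joint loss, cross-entropy on the logits plus a SupCon-style supervised contrastive term on normalized features, weighted by an illustrative coefficient; the exact pair construction, loss weighting, and the NFD module from the abstract are not reproduced.

```python
import numpy as np

# Hedged sketch of a joint objective: cross-entropy on classifier logits plus a
# supervised contrastive term on L2-normalized feature embeddings,
#   L = L_ce + lambda * L_con.
# The SupCon-style loss form and the weight lambda are illustrative assumptions.

def cross_entropy(logits, labels):
    logits = logits - logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def supervised_contrastive(features, labels, tau=0.1):
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                            # exclude self-pairs
    log_denom = np.log(np.exp(sim).sum(axis=1))
    losses = []
    for i in range(len(labels)):
        pos = (labels == labels[i]) & (np.arange(len(labels)) != i)
        if pos.any():
            losses.append(-(sim[i, pos] - log_denom[i]).mean())
    return float(np.mean(losses))

rng = np.random.default_rng(5)
labels = np.repeat([0, 1, 2], 8)                              # a toy batch
features = rng.normal(size=(24, 16)) + labels[:, None]        # class-shifted features
logits = rng.normal(size=(24, 3)) + np.eye(3)[labels] * 2.0   # roughly correct logits

joint = cross_entropy(logits, labels) + 0.5 * supervised_contrastive(features, labels)
print("joint loss:", joint)
```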