Funding: Supported by the Aeronautical Science Fund of Shaanxi Province of China (20145596025)
Abstract: Accelerating convergence and avoiding local optima are two main goals of particle swarm optimization (PSO). The basic PSO model and some of its variants do not consider enhancing the explorative capability of each particle; these methods therefore converge slowly and may become trapped in a local optimum. To enhance the explorative capability of particles, a scheme called explorative capability enhancement in PSO (ECE-PSO) is proposed, which introduces virtual particles displaced in random directions with random amplitudes. A linearly decreasing method related to the maximum number of iterations and a nonlinearly decreasing method related to the fitness of the globally best particle are employed to produce the virtual particles. These two methods are thoroughly compared with four representative advanced PSO variants on eight unimodal and multimodal benchmark problems. Experimental results indicate that ECE-PSO outperforms these state-of-the-art PSO variants in both convergence speed and solution quality.
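The core idea of the abstract above can be illustrated with a minimal sketch: alongside a standard PSO update, each particle spawns a "virtual particle" displaced in a random direction with a random amplitude, and jumps to it if it is fitter. The function names, PSO constants, and the linear amplitude schedule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sphere(x):
    """Simple unimodal test function, minimum 0 at the origin."""
    return float(np.sum(x**2))

def ece_pso(f, dim=10, n=20, iters=200, w=0.7, c1=1.5, c2=1.5,
            a_max=1.0, a_min=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))          # positions
    v = np.zeros((n, dim))                    # velocities
    pbest = x.copy()                          # personal bests
    pfit = np.array([f(p) for p in x])
    g = pbest[np.argmin(pfit)].copy()         # global best
    for t in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        # linearly decreasing amplitude tied to the maximum iteration
        amp = a_max - (a_max - a_min) * t / iters
        direc = rng.normal(size=(n, dim))
        direc /= np.linalg.norm(direc, axis=1, keepdims=True)
        virt = x + amp * rng.random((n, 1)) * direc   # virtual particles
        for i in range(n):
            fx, fv = f(x[i]), f(virt[i])
            if fv < fx:                       # accept the fitter virtual particle
                x[i], fx = virt[i], fv
            if fx < pfit[i]:
                pfit[i], pbest[i] = fx, x[i].copy()
        g = pbest[np.argmin(pfit)].copy()
    return g, float(min(pfit))

best, val = ece_pso(sphere)
```

The virtual-particle jump is what the abstract credits with the extra exploration: it lets a particle probe beyond where its velocity alone would carry it, with the probe radius shrinking as the run progresses.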
Funding: Project (50275150) supported by the National Natural Science Foundation of China; Project (20070533131) supported by the Research Fund for the Doctoral Program of Higher Education of China
Abstract: A simplex particle swarm optimization (simplex-PSO) derived from the Nelder-Mead simplex method was proposed to optimize high-dimensional functions. In simplex-PSO, the velocity term is abandoned, and its reference objectives are the best particle and the centroid of all particles except the best one. Convergence theorems for linear time-varying discrete systems prove that simplex-PSO is consistently asymptotically convergent. To reduce the probability of trapping in a local optimum, an extremum mutation was introduced into simplex-PSO, yielding simplex-PSO-t (simplex-PSO with turbulence). Several experiments were carried out to verify the validity of simplex-PSO and simplex-PSO-t, and the results confirmed two conclusions: (1) simplex-PSO-t can optimize functions of up to 200 dimensions; (2) compared with chaos PSO (CPSO), the best optimum index improves by a factor of 1×10^2 to 1×10^4.
Abstract: Back-propagation (BP) is a commonly used neural network training method, but it has some disadvantages, such as local minima, sensitivity to the initial values of the weights, and total dependence on gradient information. This paper presents several methods for training a neural network: the standard particle swarm optimizer (PSO); the guaranteed convergence particle swarm optimizer (GCPSO), an improved PSO algorithm; and GCPSO-BP, an algorithm combining GCPSO with BP. Simulation results demonstrate the effectiveness of the three algorithms for neural network training.
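The gradient-free training idea above can be sketched as follows: each particle's position encodes the full weight vector of a tiny 2-2-1 network, and the fitness is the mean squared error on XOR, so no gradients are ever computed. The network size, PSO constants, and task are assumptions for demonstration only; they do not reproduce the paper's experiments.

```python
import numpy as np

# XOR training data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(wv, X):
    """Decode a 9-element weight vector into a 2-2-1 net and run it."""
    W1, b1 = wv[:4].reshape(2, 2), wv[4:6]
    W2, b2 = wv[6:8], wv[8]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def mse(wv):
    return float(np.mean((forward(wv, X) - y) ** 2))

def pso_train(f, dim=9, n=30, iters=400, w=0.72, c1=1.49, c2=1.49, seed=2):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2, 2, (n, dim))
    v = np.zeros((n, dim))
    pbest, pfit = x.copy(), np.array([f(p) for p in x])
    g, gfit = pbest[np.argmin(pfit)].copy(), float(pfit.min())
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        for i in range(n):
            fi = f(x[i])
            if fi < pfit[i]:
                pfit[i], pbest[i] = fi, x[i].copy()
                if fi < gfit:
                    gfit, g = fi, x[i].copy()
    return g, gfit

weights, err = pso_train(mse)
```

Because the fitness is just a black-box function of the weight vector, this sidesteps BP's dependence on gradient information; a GCPSO or GCPSO-BP hybrid would change only the update rule, not this encoding.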
Abstract: To address the problem that existing PSO improvement strategies cannot restore the search performance of particles that have already fallen into local optima or converged prematurely, an adaptive particle swarm optimization based on trap label and lazy ant (TLLA-APSO) algorithm is proposed. The trap-label strategy provides the swarm with a dynamic velocity increment, freeing it from the pull of the current optimum. The lazy-ant search strategy diversifies particle velocities, improving population diversity. An inertia-cognition strategy introduces historical positions into the velocity update, increasing the diversity of particle paths and improving exploration, so that particles avoid falling into new local optima more effectively. The convergence of the PSO algorithm with historical positions is proved theoretically. Simulation results show that the proposed algorithm not only effectively resolves the problems of particles trapped in local optima and premature convergence, but also achieves faster convergence and higher search accuracy than the compared algorithms.
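One ingredient of the abstract above, the inertia-cognition idea of introducing historical positions into the velocity update, can be sketched as an extra attraction term toward a particle's own position from several iterations back. The coefficient c3 and the history lag are illustrative assumptions, and the trap-label and lazy-ant strategies are omitted entirely; this is not the paper's algorithm.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x**2))

def history_pso(f, dim=8, n=25, iters=250, w=0.7, c1=1.4, c2=1.4,
                c3=0.3, lag=5, seed=3):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    hist = [x.copy()]                      # buffer of past swarm positions
    pbest, pfit = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pfit)].copy()
    for _ in range(iters):
        old = hist[0]                      # oldest retained (~lag steps back)
        r1, r2, r3 = (rng.random((n, dim)) for _ in range(3))
        # standard terms plus an attraction toward the historical position
        v = (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
             + c3 * r3 * (old - x))
        x = x + v
        hist.append(x.copy())
        hist = hist[-lag:]                 # keep only the last `lag` snapshots
        for i in range(n):
            fi = f(x[i])
            if fi < pfit[i]:
                pfit[i], pbest[i] = fi, x[i].copy()
        g = pbest[np.argmin(pfit)].copy()
    return g, float(pfit.min())

best, val = history_pso(sphere)
```

The historical term pulls particles back along paths they have already traversed, which adds path diversity in the sense the abstract describes while leaving the standard inertia and cognition terms intact.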