Funding: supported in part by the National Outstanding Youth Foundation of P.R. China (60525303), the National Natural Science Foundation of P.R. China (60404022, 60604004), the Natural Science Foundation of Hebei Province (102160), the special projects in mathematics funded by the Natural Science Foundation of Hebei Province (07M005), and the NS of Education Office in Hebei Province (2004123).
Abstract: The Newton-like algorithm with price estimation error in optimization flow control for networks is analyzed. The estimation error is treated as inexactness of the gradient, and the resulting inexact descent direction is analyzed. Based on optimization theory, a sufficient condition for convergence of the algorithm with bounded price estimation error is obtained. Furthermore, even when this sufficient condition does not hold, the algorithm can still converge provided the step size is modified, and an attraction region is obtained. By applying LaSalle's invariance principle to a suitable Lyapunov function, the dynamic system described by the algorithm is proved to be globally stable when the error is zero. The Newton-like algorithm with bounded price estimation error is also globally stable provided the error satisfies the sufficient condition for convergence. All trajectories ultimately converge to the equilibrium point.
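For readers who want to experiment with the kind of update analyzed above, the following is a minimal, illustrative sketch (not the paper's exact algorithm or network model): a scaled gradient-projection price update for utility-based flow control in which each source sees its path price corrupted by a bounded estimation error. The routing matrix, capacities, log-utility weights, step size gamma, and error bound err_bound are all assumptions chosen for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Routing matrix R[l, s] = 1 if source s uses link l (toy 2-link, 3-source network).
R = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
c = np.array([1.0, 2.0])           # link capacities
w = np.array([1.0, 2.0, 3.0])      # weights of log-utilities U_s(x) = w_s * log(x)
p = np.ones(2)                     # link prices (dual variables)
gamma = 0.1                        # step size
err_bound = 0.05                   # bound on the price estimation error

for t in range(500):
    # Each source sees a possibly erroneous path price q_s = sum_l R[l, s] * p_l + e_s.
    q = np.maximum(R.T @ p + rng.uniform(-err_bound, err_bound, size=3), 1e-6)
    x = w / q                      # rate maximizing w_s * log(x) - q_s * x
    y = R @ x                      # aggregate traffic on each link
    # Newton-like scaling: divide the dual gradient (y - c) by a diagonal
    # curvature estimate, then project the prices onto the nonnegative orthant.
    B = np.maximum(R @ (w / q**2), 1e-6)
    p = np.maximum(p + gamma * (y - c) / B, 0.0)

print("prices:", p)
print("rates: ", x)
```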
Funding: supported by the National Natural Science Foundation of China (61273126), the Natural Science Foundation of Guangdong Province (10251064101000008, S201210009675), and the Fundamental Research Funds for the Central Universities (2012ZM0059).
Abstract: The asymptotic and stability properties of general stochastic functional differential equations are investigated by the multiple Lyapunov function method, which admits non-negative upper bounds for the stochastic derivatives of the Lyapunov functions. A LaSalle-type theorem on asymptotic properties, described by the limit sets of the solutions of the equations, is obtained. Based on these asymptotic properties of the limit set, a theorem on the asymptotic stability of stochastic functional differential equations is also established, which makes it easier to construct Lyapunov functions in applications. In particular, the well-known classical theorem on stochastic stability is a special case of this result: the operator LV is not required to be negative, which is easier to satisfy, and the stochastic perturbation plays an important role. These points show clearly how the traditional method of finding Lyapunov functions is improved. A numerical simulation example is given to illustrate the usage of the method.
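The abstract mentions a numerical simulation example; that example is not reproduced here, but the following sketch shows the general flavour under stated assumptions: an Euler-Maruyama simulation of a simple scalar stochastic delay differential equation, together with the value of a candidate Lyapunov function V(x) = x^2 along the sample path. The equation, its coefficients a and b, the delay tau, and the choice of V are illustrative assumptions, not the paper's example.

```python
import numpy as np

# Scalar stochastic delay differential equation (chosen for illustration only):
#   dx(t) = -a * x(t) dt + b * x(t - tau) dW(t)
a, b, tau = 1.0, 0.3, 1.0
dt, T = 1e-3, 20.0
d, n = int(round(tau / dt)), int(round(T / dt))

rng = np.random.default_rng(1)
x = np.empty(d + n + 1)
x[:d + 1] = 1.0                    # constant initial history on [-tau, 0]

# Euler-Maruyama time stepping with a delayed diffusion term.
for k in range(d, d + n):
    dW = rng.normal(0.0, np.sqrt(dt))
    x[k + 1] = x[k] - a * x[k] * dt + b * x[k - d] * dW

V = x[d:]**2                       # candidate Lyapunov function V(x) = x^2 along the path
print("V at t=0:", V[0], "  V at t=T:", V[-1])
```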
Funding: supported by the National Social Science Fund of China (2022-SKJJ-B-084).
Abstract: Learning algorithms for causal discovery mainly include score-based methods and genetic algorithms (GA). Score-based algorithms are prone to search-space explosion, while the classical GA is slow to converge and prone to falling into local optima. To address these issues, an improved GA with domain knowledge (IGADK) is proposed. First, domain knowledge is incorporated into the learning process of causality to construct a new fitness function. Second, a dynamical mutation operator is introduced to accelerate the convergence rate. Finally, an experiment is conducted on simulated data, comparing the classical GA with IGADK under domain knowledge of varying accuracy. IGADK greatly reduces the number of iterations, populations, and samples required for learning, which illustrates the efficiency and effectiveness of the proposed algorithm.
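To make the two ingredients concrete (a fitness function that mixes a data-driven score with agreement to domain knowledge, and a mutation rate that changes dynamically with the generation index), here is a toy sketch on a bit-string problem. It is not the paper's IGADK or a causal-structure scorer; the stand-in fitness, the knowledge vector, the weight lam, and the decay schedule of the mutation rate are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_genes, pop_size, n_gen = 20, 40, 200

target = rng.integers(0, 2, n_genes)          # stand-in for the data-optimal structure
knowledge = target.copy()
knowledge[:5] = 1 - knowledge[:5]             # partially accurate "expert" knowledge
lam = 0.3                                     # weight of the knowledge term

def fitness(ind):
    data_term = -np.sum(ind != target)        # toy data-driven score
    know_term = -np.sum(ind != knowledge)     # agreement with domain knowledge
    return data_term + lam * know_term

pop = rng.integers(0, 2, (pop_size, n_genes))
for g in range(n_gen):
    p_mut = 0.2 * (1 - g / n_gen) + 0.01      # dynamic (decaying) mutation rate
    scores = np.array([fitness(ind) for ind in pop])
    # Binary tournament selection.
    idx = rng.integers(0, pop_size, (pop_size, 2))
    parents = pop[np.where(scores[idx[:, 0]] >= scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # One-point crossover between consecutive parents.
    cut = rng.integers(1, n_genes, pop_size)
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        children[i, cut[i]:] = parents[i + 1, cut[i]:]
        children[i + 1, cut[i]:] = parents[i, cut[i]:]
    # Dynamic mutation: flip each bit with probability p_mut.
    flip = rng.random(children.shape) < p_mut
    pop = np.where(flip, 1 - children, children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best fitness:", fitness(best))
```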
Abstract: To address the low search accuracy and slow convergence of the Grey Wolf Optimizer (GWO), an adaptive grey wolf optimization algorithm based on an IMQ inertia weight strategy (ISGWO) is proposed. The algorithm exploits the properties of the IMQ function to adjust the inertia weight nonlinearly, thereby better balancing the algorithm's global exploration and local exploitation; at the same time, individual positions are updated adaptively based on a Sigmoid exponential function, so that the solution space of the problem is searched and optimized more effectively. ISGWO is tested on 6 basic benchmark functions and 29 CEC2017 functions and compared with 6 commonly used algorithms; the experimental results show that ISGWO achieves better convergence accuracy and speed.
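As a rough illustration of damping the standard GWO position update with a nonlinearly decaying, IMQ-shaped weight, here is a sketch on a simple benchmark. It is one plausible reading rather than the paper's ISGWO: the exact IMQ parameterization, the way the weight enters the position update, and the omission of the Sigmoid-based adaptive update are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    """Simple benchmark objective: f(x) = sum x_i^2, minimum 0 at the origin."""
    return float(np.sum(x**2))

dim, n_wolves, n_iter = 10, 30, 300
lb, ub = -10.0, 10.0
X = rng.uniform(lb, ub, (n_wolves, dim))

for t in range(n_iter):
    fit = np.array([sphere(x) for x in X])
    order = np.argsort(fit)
    alpha, beta, delta = X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()

    a = 2.0 * (1.0 - t / n_iter)                         # classical linearly decreasing GWO parameter
    w = 0.9 / np.sqrt(1.0 + (3.0 * t / n_iter)**2)       # IMQ-shaped nonlinear weight (assumed form)

    for i in range(n_wolves):
        cand = np.zeros(dim)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            cand += leader - A * np.abs(C * leader - X[i])
        # Weighted combination of the old position and the mean GWO candidate position.
        X[i] = np.clip(w * X[i] + (1.0 - w) * cand / 3.0, lb, ub)

print("best objective after optimization:", min(sphere(x) for x in X))
```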