Funding: supported by the National Science Foundation of China (70771080) and the Social Science Foundation of the Ministry of Education (10YJC630233).
Abstract: The penalty function method, introduced many years ago, is an important numerical method for mathematical programming problems. In this article, we propose a dual-relax penalty function approach, which is significantly different from the existing penalty function approaches for solving bilevel programming, to solve nonlinear bilevel programming with a linear lower-level problem. Our algorithm facilitates the error analysis for computing an approximate solution to the bilevel programming problem. An error estimate is obtained between the optimal objective function value of the dual-relax penalty problem and that of the original bilevel programming problem. An example is given to illustrate the feasibility of the proposed approach.
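For orientation, the following is a generic statement of this problem class together with a textbook-style penalty reformulation built from the duality gap of the linear lower-level problem; the symbols F, d, A, B, b and the weight rho are illustrative, and the paper's dual-relax construction may differ from this sketch.

```latex
% Nonlinear bilevel program with a linear lower-level problem (illustrative notation):
%   min_{x,y}  F(x, y)
%   s.t.       y  solves  min_{y'} { d^T y' : A x + B y' <= b }.
% A generic penalty reformulation replaces lower-level optimality by penalizing the
% duality gap of the linear lower-level problem with a weight rho > 0:
\begin{align*}
\min_{x,\,y,\,u}\quad & F(x,y) \;+\; \rho\,\bigl(d^{\mathsf T} y + (b - A x)^{\mathsf T} u\bigr) \\
\text{s.t.}\quad      & A x + B y \le b, \qquad B^{\mathsf T} u = -d, \qquad u \ge 0,
\end{align*}
% where u is the dual variable of the lower-level problem; the penalized term
% d^T y + (b - A x)^T u is nonnegative and vanishes exactly when y is lower-level optimal.
```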
Abstract: By redefining the multiplier associated with each inequality constraint as a positive definite function of the originally defined multiplier, say u_i^2, i = 1, 2, ..., m, the nonnegativity constraints imposed on the inequality-constraint multipliers in the Karush-Kuhn-Tucker necessary conditions are removed. For constructing the Lagrange neural network and the Lagrange multiplier method, it is no longer necessary to convert inequality constraints into equality constraints via slack variables in order to reuse the results dedicated to equality constraints; those results can be proved similarly with minor modification. Utilizing this technique, a new type of Lagrange neural network and a new type of Lagrange multiplier method are devised, both of which handle inequality constraints directly. Their stability and convergence are also analyzed rigorously.
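A minimal numerical sketch of the idea on an assumed toy problem: primal-dual gradient dynamics in the spirit of a Lagrange neural network, with the inequality multiplier represented as u^2 so no nonnegativity constraint or projection is needed. The problem data, step size, and Euler discretization are illustrative and not taken from the paper, which analyzes continuous-time dynamics.

```python
import numpy as np

# Illustrative problem (not from the paper):
#   minimize   f(x) = (x1 - 2)^2 + (x2 - 1)^2
#   subject to g(x) = x1 + x2 - 2 <= 0
# Lagrangian with the substituted multiplier: L(x, u) = f(x) + u^2 * g(x).

def grad_f(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

def g(x):
    return x[0] + x[1] - 2.0

grad_g = np.array([1.0, 1.0])

x = np.zeros(2)
u = 1.0      # start away from 0: at u = 0 the u-dynamics 2*u*g(x) would stall
dt = 0.01    # Euler step for the continuous-time dynamics (assumed)

for _ in range(50_000):
    dx = -(grad_f(x) + (u ** 2) * grad_g)   # x follows -dL/dx
    du = 2.0 * u * g(x)                     # u follows +dL/du
    x = x + dt * dx
    u = u + dt * du

print("x   =", np.round(x, 4))     # expected near (1.5, 0.5)
print("u^2 =", round(u ** 2, 4))   # effective multiplier, expected near 1.0
```

Note how u^2 stays nonnegative by construction, which is exactly why the nonnegativity conditions on the multipliers can be dropped.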
Abstract: In this paper, we improve the algorithm proposed by T. F. Coleman and A. R. Conn in [1]. It is shown that the improved algorithm possesses global convergence and, under some conditions, achieves local superlinear convergence, which the original algorithm does not possess.
Funding: supported by the National Natural Science Foundation of China under Grant No. 60501018.
Abstract: A new preamble structure and design method for orthogonal frequency division multiplexing (OFDM) systems is described, which results in a two-symbol-long training preamble. The preamble contains four parts, with the first part identical to the third, and the four parts are computed using a nonlinear programming (NLP) model such that the moving correlation of the preamble yields a steep, rectangle-like pulse of a certain width, whose step-down indicates the timing offset. Simulation results in an AWGN channel are given to evaluate the performance of the proposed preamble design.
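A small sketch of the moving-correlation timing metric that such a structure enables, assuming four quarter-length parts with the first and third identical; the part length, random part contents, noise level, and detection rule are illustrative assumptions, not the paper's NLP-designed preamble.

```python
import numpy as np

# Illustrative four-part preamble [p1, p2, p1, p4] whose 1st and 3rd parts are identical,
# and the moving correlation between receive windows spaced 2*L samples apart.

rng = np.random.default_rng(0)
L = 16                                    # length of each of the four parts (assumed)
p1 = rng.standard_normal(L) + 1j * rng.standard_normal(L)
p2 = rng.standard_normal(L) + 1j * rng.standard_normal(L)
p4 = rng.standard_normal(L) + 1j * rng.standard_normal(L)
preamble = np.concatenate([p1, p2, p1, p4])

offset = 40                               # true start of the preamble in the received stream
rx = np.concatenate([
    np.zeros(offset, dtype=complex),                              # idle samples before the preamble
    preamble,
    rng.standard_normal(100) + 1j * rng.standard_normal(100),    # payload-like samples
])
rx += 0.05 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size))  # AWGN

def timing_metric(r, L):
    """Energy-normalized moving correlation between r[d:d+L] and r[d+2L:d+3L]."""
    n = r.size - 3 * L
    m = np.empty(n)
    for d in range(n):
        p = np.sum(np.conj(r[d:d + L]) * r[d + 2 * L:d + 3 * L])
        e = np.sum(np.abs(r[d + 2 * L:d + 3 * L]) ** 2)
        m[d] = np.abs(p) ** 2 / (e ** 2 + 1e-12)
    return m

metric = timing_metric(rx, L)
print("true start:", offset, " estimated:", int(np.argmax(metric)))
```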
Abstract: Hanson and Mond have given sets of necessary and sufficient conditions for optimality in constrained optimization by introducing classes of generalized functions, called type I functions. Recently, Bector defined univex functions, a new class of functions that unifies several concepts of generalized convexity. In this paper, additional conditions are attached to the Kuhn-Tucker conditions, giving a set of conditions which are both necessary and sufficient for optimality in constrained optimization, under appropriate constraint qualifications.
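For reference, the classical Kuhn-Tucker necessary conditions being augmented, stated for the generic problem of minimizing f(x) subject to g_i(x) <= 0 under a suitable constraint qualification; the paper's additional type I / univex conditions are not reproduced here.

```latex
% Kuhn-Tucker necessary conditions at x* for  min f(x)  s.t.  g_i(x) <= 0,  i = 1,...,m:
\begin{align*}
\nabla f(x^{*}) + \sum_{i=1}^{m} \lambda_i \,\nabla g_i(x^{*}) &= 0, \\
\lambda_i \, g_i(x^{*}) &= 0, \qquad i = 1,\dots,m, \\
\lambda_i \ge 0, \qquad g_i(x^{*}) &\le 0, \qquad i = 1,\dots,m.
\end{align*}
```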
Abstract: Conjugate gradient optimization algorithms depend on their search directions, which vary with the choice of the parameters defining those directions. In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented. Global convergence of the new method with two kinds of common line searches is proved. Firstly, it is shown, using the reverse modulus of continuity function and a forcing function, that the new method for unconstrained optimization works for a continuously differentiable function with Curry-Altman's step size rule and a bounded level set. Secondly, by using a comparison technique, some general convergence properties of the new method with other kinds of step size rules are established. Numerical experiments show that the new method is efficient compared with the FR conjugate gradient method.
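A minimal sketch of a restarting conjugate gradient iteration with the Polak-Ribiere (PR) parameter on an assumed quadratic test problem; the restart rule, Armijo backtracking line search, and test function are common textbook choices used here for illustration, not the note's hybrid PR/HS parameter or the Curry-Altman step size rule.

```python
import numpy as np

def cg_pr_restart(f, grad, x0, tol=1e-8, max_iter=500):
    """Conjugate gradient with the PR parameter, restarting when beta < 0 or every n steps."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    n = x.size
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                                 # safeguard: ensure a descent direction
            d = -g
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope:    # Armijo backtracking line search
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)           # Polak-Ribiere formula
        if beta < 0 or (k + 1) % n == 0:               # restart with steepest descent
            beta = 0.0
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: a strictly convex quadratic f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(cg_pr_restart(f, grad, np.array([5.0, -3.0])))   # expected near [0.2, 0.4]
```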
Abstract: In this paper, we focus on a nondifferentiable minimax fractional programming (NMFP) problem and obtain appropriate duality results for a higher-order dual model under higher-order B-(p,r)-invex functions. We provide a nontrivial illustration of a function which belongs to the class of higher-order B-(p,r)-invex functions but not to the class of second-order B-(p,r)-invex functions already existing in the literature. An example of finding a minimax solution of the NMFP problem by using higher-order B-(p,r)-invex functions is also given. Various known results are discussed as particular cases.
Abstract: The unstable kinematic characteristics of semi-trailer vehicles pose a severe challenge to autonomous motion planning during parking. To address the low efficiency and poor smoothness of parking motion planning for semi-trailers in static scenes with multiple obstacles, this paper proposes a sequential motion planning algorithm (SMPA). First, an initial path generation method based on a quadratic programming strategy and an improved bidirectional rapidly-exploring random tree (Bi-RRT) algorithm is proposed. Then, combined with a method for checking the feasibility of path nodes under the vehicle's nonholonomic differential constraints, a probability-based goal-biased sampling strategy is proposed, which improves sampling efficiency. Finally, a nonlinear optimal control model oriented toward the continuity of the vehicle system's control variables is constructed to handle the docking of trajectory segments at parking reversal points and improve the smoothness of the parking trajectory. Simulation results show that, in multi-obstacle scenarios, the planning time of the proposed method is reduced by 86.71% and 21.44% compared with Hybrid A* and Bi-RRT, respectively, and the trajectory quality is also superior.
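A bare-bones sketch of the probability-based goal-biased sampling idea inside an RRT-style loop for a 2-D point robot; the bias probability, workspace bounds, step size, and obstacle-free workspace are illustrative assumptions and do not reflect the tractor-trailer kinematics, Bi-RRT tree connection, quadratic programming stage, or trajectory optimization of the proposed SMPA.

```python
import random

GOAL_BIAS = 0.1          # probability of sampling the goal directly (assumed)
STEP = 0.5               # maximum extension length per iteration
BOUNDS = (0.0, 20.0)     # square workspace [0, 20] x [0, 20]

def sample(goal):
    """With probability GOAL_BIAS return the goal, otherwise a uniform random point."""
    if random.random() < GOAL_BIAS:
        return goal
    return (random.uniform(*BOUNDS), random.uniform(*BOUNDS))

def nearest(nodes, q):
    return min(nodes, key=lambda v: (v[0] - q[0]) ** 2 + (v[1] - q[1]) ** 2)

def steer(q_near, q_rand):
    """Extend from q_near toward q_rand by at most STEP."""
    dx, dy = q_rand[0] - q_near[0], q_rand[1] - q_near[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < 1e-9:
        return q_near
    s = min(STEP, dist) / dist
    return (q_near[0] + s * dx, q_near[1] + s * dy)

def rrt(start, goal, max_iter=5000):
    tree, parent = [start], {start: None}
    for _ in range(max_iter):
        q_rand = sample(goal)
        q_near = nearest(tree, q_rand)
        q_new = steer(q_near, q_rand)
        tree.append(q_new)
        parent[q_new] = q_near
        if (q_new[0] - goal[0]) ** 2 + (q_new[1] - goal[1]) ** 2 < STEP ** 2:
            return tree, parent, q_new        # goal region reached
    return tree, parent, None

tree, parent, reached = rrt((1.0, 1.0), (18.0, 15.0))
print("nodes expanded:", len(tree), " goal reached:", reached is not None)
```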