The TiO2 dosage ([TiO2]), irradiation time (t), and pH were taken as the independent variables, and the decolorization percentage (DC%) as the dependent variable. A three-factor Box-Behnken design and a U12(6×4×3) uniform design were used to generate the training set and the validation set, respectively. A back-propagation neural network (BPNN) was then adopted to model the photocatalytic degradation of a mixture of the weakly acidic dyes GRS big red and weak acid red R. The best model gives a correlation coefficient R of 0.9345 and a mean relative error (MRE) of 3.23% between predicted and experimental values for the training set, and R = 0.9257 with MRE = 2.75% for the validation set. The influences of pH, [TiO2], and t on DC% were also examined with the BPNN model. Combining the BPNN model with nonlinear constrained optimization, the optimal conditions were found to be pH = 5.0, [TiO2] = 1.50 g/L, and t = 40 min. Under these conditions the experimental DC% is 99.23% and the predicted value is 98.98%, a relative error of -0.25%. These results indicate that the model not only simulates the photocatalytic degradation system well but also yields the optimal experimental conditions.
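The core of the approach above is a small feed-forward network trained by back-propagation to map the three factors (pH, [TiO2], t) to DC%. A minimal sketch of such a BPNN is given below; the synthetic data, hidden-layer size, learning rate, and iteration count are illustrative assumptions, not the authors' actual architecture or dataset.

```python
import numpy as np

# Minimal back-propagation neural network sketch: 3 inputs (pH, [TiO2], t)
# -> one sigmoid hidden layer -> 1 linear output (DC%).
# All data and hyperparameters below are hypothetical.
rng = np.random.default_rng(0)

# 12 hypothetical runs with factors scaled to [0, 1]; synthetic targets.
X = rng.uniform(size=(12, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]).reshape(-1, 1)

n_hidden = 6
W1 = rng.normal(scale=0.5, size=(3, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = h @ W2 + b2                      # linear output for regression
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backward pass: gradients of the mean-squared error.
    g_out = 2 * err / len(X)
    g_W2 = h.T @ g_out
    g_b2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * h * (1 - h)     # sigmoid derivative
    g_W1 = X.T @ g_h
    g_b1 = g_h.sum(axis=0)
    # Gradient-descent weight update.
    W2 -= lr * g_W2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.6f}")
```

Once trained, such a network serves as a cheap surrogate of the degradation system, so that a constrained optimizer can search the (pH, [TiO2], t) space for the maximal predicted DC% without further experiments.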
Conjugate gradient optimization algorithms depend on search directions determined by different choices of the direction parameter. In this note, by combining the good numerical performance of the PR and HS methods with the global convergence property of the class of conjugate gradient methods presented by Hu and Storey (1991), a class of new restarting conjugate gradient methods is presented. Global convergence of the new method is proved under two common line searches. First, using the inverse modulus of continuity and a forcing function, it is shown that the new method for unconstrained optimization works for a continuously differentiable function under Curry-Altman's step-size rule with a bounded level set. Second, by a comparison technique, some general convergence properties of the new method under another step-size rule are established. Numerical experiments show that the new method is efficient in comparison with the FR conjugate gradient method.