Background Plant tissue culture has emerged as a tool for improving cotton propagation and genetics, but the recalcitrant nature of cotton makes in vitro regeneration difficult. Cotton's recalcitrance is influenced by genotype, explant type, and environmental conditions. To overcome these issues, this study uses different machine learning-based predictive models employing multiple input factors. Cotyledonary node explants of two commercial cotton cultivars (STN-468 and GSN-12) were isolated from 7–8-day-old seedlings and preconditioned with 5, 10, and 20 mg·L^(-1) kinetin (KIN) for 10 days. Thereafter, explants were postconditioned on full Murashige and Skoog (MS), 1/2MS, 1/4MS, and full MS + 0.05 mg·L^(-1) KIN, and cultured in a growth room illuminated with combinations of red and blue light-emitting diodes (LEDs). Statistical analysis (analysis of variance, regression analysis) was employed to assess the impact of different treatments on shoot regeneration, with artificial intelligence (AI) models used to confirm the findings. Results GSN-12 exhibited superior shoot regeneration potential compared with STN-468, with an average of 4.99 shoots per explant versus 3.97. Optimal results were achieved with 5 mg·L^(-1) KIN preconditioning, 1/4MS postconditioning, and 80% red LED, with a maximum of 7.75 shoots per explant for GSN-12 under these conditions; STN-468 reached 6.00 shoots under 10 mg·L^(-1) KIN preconditioning, MS with 0.05 mg·L^(-1) KIN postconditioning, and 75.0% red LED. Rooting was successfully achieved with naphthalene acetic acid and activated charcoal. Additionally, three powerful AI-based models, namely extreme gradient boosting (XGBoost), random forest (RF), and the artificial neural network-based multilayer perceptron (MLP) regression model, validated the findings. Conclusion GSN-12 outperformed STN-468, with optimal results from 5 mg·L^(-1) KIN + 1/4MS + 80% red LED. Applying machine learning-based prediction models to optimize cotton tissue culture protocols for shoot regeneration helps improve cotton regeneration efficiency.
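A minimal sketch of how the three regression models named in the abstract (XGBoost, RF, and MLP) could be compared on such data; the file name, column names, and hyperparameters are illustrative assumptions, not the authors' protocol.

```python
# Hedged sketch: compare XGBoost, random forest, and MLP regressors on
# hypothetical tissue-culture factors predicting shoots per explant.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

df = pd.read_csv("shoot_regeneration.csv")   # hypothetical dataset
X = pd.get_dummies(df[["cultivar", "kin_precond_mg_l", "ms_strength", "red_led_pct"]])
y = df["shoots_per_explant"]

models = {
    "XGBoost": XGBRegressor(n_estimators=300, learning_rate=0.05),
    "RF": RandomForestRegressor(n_estimators=300, random_state=0),
    "MLP": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.3f}")
```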
Background Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its use in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is greatly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various techniques available, crop models are the most effective and widely used tools for predicting yields. Results This study compares statistical and machine learning models to assess their ability to predict cotton yield across major producing districts of Karnataka, India, utilizing a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. The artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2) for cotton. The model evaluation metrics, such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF), were also within acceptable limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts where yield was predicted. These differences highlighted the differential interactions of weather factors in cotton yield formation in each district, reflecting the individual response of each weather factor under different soil and management conditions across the major cotton-growing districts of Karnataka. Conclusions Compared with statistical models, machine learning models such as ANNs proved more efficient in forecasting cotton yield because of their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting in rainfed conditions and for studying the relative impacts of weather factors on yield. Thus, the study aims to provide valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
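The evaluation metrics cited above have standard definitions; the hedged sketch below computes RMSE, nRMSE, and modelling efficiency (EF) for a hypothetical set of observed and forecast district yields.

```python
# Standard forecast-evaluation metrics; the yield arrays are illustrative only.
import numpy as np

def rmse(obs, pred):
    return np.sqrt(np.mean((obs - pred) ** 2))

def nrmse(obs, pred):
    # normalized by the observed mean, expressed as a percentage
    return 100.0 * rmse(obs, pred) / np.mean(obs)

def modelling_efficiency(obs, pred):
    # Nash-Sutcliffe style efficiency: 1 means a perfect fit
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

obs = np.array([310.0, 285.0, 402.0, 356.0])    # hypothetical yields, kg/ha
pred = np.array([298.0, 301.0, 390.0, 348.0])
print(rmse(obs, pred), nrmse(obs, pred), modelling_efficiency(obs, pred))
```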
Machine picking of cotton is an emerging practice in India intended to address labour shortages and rising production costs. Cotton production has been declining in recent years; however, the high density planting system (HDPS) offers a viable method to enhance productivity by increasing plant populations per unit area, optimizing resource utilization, and facilitating machine picking. Cotton is an indeterminate plant that produces excessive vegetative growth under favourable soil fertility and moisture conditions, which poses challenges for efficient machine picking. To address this issue, the application of plant growth retardants (PGRs) is essential for controlling canopy architecture. PGRs reduce internode elongation, promote regulated branching, and increase plant compactness, making cotton plants better suited for machine picking. PGR application also optimizes the distribution of photosynthates between vegetative and reproductive growth, resulting in higher yields and improved fibre quality. The integration of HDPS and PGR applications results in an optimal plant architecture for improving machine picking efficiency. However, the success of this integration depends on several factors, including cotton variety, environmental conditions, and geographical variations. These approaches not only address yield stagnation and labour shortages but also help to establish more effective and sustainable cotton farming practices, resulting in higher cotton productivity.
Machine learning (ML) is well suited for the prediction of high-complexity, high-dimensional problems such as those encountered in terminal ballistics. We evaluate the performance of four popular ML-based regression models, extreme gradient boosting (XGBoost), artificial neural network (ANN), support vector regression (SVR), and Gaussian process regression (GP), on two common terminal ballistics problems: (a) predicting the V50 ballistic limit of monolithic metallic armour impacted by small and medium calibre projectiles and fragments, and (b) predicting the depth to which a projectile will penetrate a target of semi-infinite thickness. To achieve this, we utilise two datasets, each consisting of approximately 1000 samples, collated from public release sources. We demonstrate that all four model types provide similarly excellent agreement when interpolating within the training data and diverge when extrapolating outside this range. Although extrapolation is not advisable for ML-based regression models, for applications such as lethality/survivability analysis such capability is required. To circumvent this, we implement expert knowledge and physics-based models via enforced monotonicity, as a Gaussian prior mean, and through a modified loss function. The physics-informed models demonstrate improved performance over both classical physics-based models and the basic ML regression models, providing an ability to accurately fit experimental data when it is available and to revert to the physics-based model when it is not. The resulting models demonstrate high levels of predictive accuracy over a very wide range of projectile types, target materials and thicknesses, and impact conditions significantly more diverse than that achievable with any existing analytical approach. Compared with numerical analysis tools such as finite element solvers, the ML models run orders of magnitude faster. We provide some general guidelines throughout for the development, application, and reporting of ML models in terminal ballistics problems.
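One of the physics-informed ingredients mentioned, enforced monotonicity, can be illustrated with XGBoost's built-in monotone constraints; the features, constraint directions, and synthetic penetration-depth data below are assumptions for illustration only, not the paper's datasets or models.

```python
# Hedged sketch: a gradient-boosted regressor whose predictions are constrained to be
# monotone in selected features, one way of injecting physical expectations into ML.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
# illustrative features: impact velocity (m/s), projectile mass (g), target thickness (mm)
X = rng.uniform([500, 1, 5], [2000, 20, 60], size=(800, 3))
# synthetic "penetration depth" with noise, purely for demonstration
y = 0.02 * X[:, 0] * np.sqrt(X[:, 1]) / X[:, 2] + rng.normal(0, 0.5, 800)

# +1: prediction must not decrease with velocity or mass; -1: must not increase with thickness
model = XGBRegressor(n_estimators=400, monotone_constraints=(1, 1, -1))
model.fit(X, y)
```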
The accurate prediction of the peak overpressure of explosion shockwaves is significant in fields such as explosion hazard assessment and structural protection, where explosion shockwaves serve as typical destructive elements. Aiming at the insufficient accuracy of existing physical models for predicting the peak overpressure of ground reflected waves, two physics-informed machine learning models are constructed. The results demonstrate that the machine learning models, which incorporate physical information by predicting the deviation between the physical model and actual values and by adding a physical loss term to the loss function, can accurately predict both the training and out-of-training datasets. Compared with existing physical models, the average relative error within the training domain is reduced from 17.459%–48.588% to 2%, and the proportion of predictions with average relative error below 20% increases from 0%–59.4% to more than 99%. In addition, the average relative error outside the training set range is reduced from 14.496%–29.389% to 5%, and the proportion of predictions with average relative error below 20% increases from 0%–71.39% to more than 99%. The inclusion of a physical loss term enforcing monotonicity in the loss function effectively improves the extrapolation performance of machine learning. The findings of this study provide a valuable reference for explosion hazard assessment and anti-explosion structural design in various fields.
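A hedged sketch of the two ideas described above: learning the deviation from a baseline physical overpressure model and adding a loss term that penalizes non-monotonic behaviour in scaled distance. The network, the baseline coefficients (a Sadovsky-type form with illustrative values), and the synthetic data are stand-ins, not the paper's models.

```python
# Sketch: residual learning on top of a physical baseline plus a monotonicity penalty.
import torch
import torch.nn as nn

def physical_model(z):                 # placeholder baseline, illustrative coefficients
    return 1.0 / z + 4.3 / z**2 + 14.0 / z**3

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

z = torch.linspace(0.5, 10.0, 200).unsqueeze(1)                 # scaled distance (illustrative)
p_obs = physical_model(z) * 1.15 + 0.05 * torch.randn_like(z)   # synthetic "measurements"

for _ in range(2000):
    opt.zero_grad()
    pred = physical_model(z) + net(z)                # physics baseline + learned deviation
    data_loss = nn.functional.mse_loss(pred, p_obs)
    # physics loss: peak overpressure should decrease as scaled distance grows
    diffs = pred[1:] - pred[:-1]
    mono_loss = torch.relu(diffs).mean()
    (data_loss + 10.0 * mono_loss).backward()
    opt.step()
```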
As the simplest hydrogen-bonded alcohol, liquid methanol has attracted intensive experimental and theoretical interest. However, theoretical investigations of this system have primarily relied on empirical intermolecular force fields or ab initio molecular dynamics with semilocal density functionals. Inspired by recent studies on bulk water using increasingly accurate machine learning force fields, we report a new machine learning force field for liquid methanol based on the hybrid functional revPBE0 with dispersion correction. Molecular dynamics simulations with this machine learning force field are orders of magnitude faster than ab initio molecular dynamics simulations, yielding the radial distribution functions, self-diffusion coefficients, and hydrogen bond network properties with very small statistical errors. The resulting structural and dynamical properties compare well with the experimental data, demonstrating the superior accuracy of this machine learning force field. This work represents a successful step toward a first-principles description of this benchmark system and showcases the general applicability of machine learning force fields in studying liquid systems.
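As one example of the post-processing mentioned above, the self-diffusion coefficient can be estimated from an MD trajectory via the Einstein relation D = lim_{t→∞} MSD/(6t); the sketch below assumes an unwrapped trajectory in ångström with a picosecond time step and is not the authors' analysis code.

```python
# Hedged sketch: self-diffusion coefficient from mean squared displacement.
import numpy as np

def self_diffusion(positions, dt_ps):
    """positions: (n_frames, n_atoms, 3) unwrapped coordinates in Angstrom."""
    n_frames = positions.shape[0]
    lags = np.arange(1, n_frames // 2, 10)
    msd = np.array([np.mean(np.sum((positions[lag:] - positions[:-lag]) ** 2, axis=-1))
                    for lag in lags])
    # linear fit of MSD vs time; slope/6 gives D (Angstrom^2/ps -> 1e-4 cm^2/s)
    slope, _ = np.polyfit(lags * dt_ps, msd, 1)
    return slope / 6.0 * 1e-4   # cm^2/s

traj = np.cumsum(np.random.normal(0, 0.05, size=(2000, 64, 3)), axis=0)  # synthetic random walk
print(self_diffusion(traj, dt_ps=0.1))
```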
Many digital platforms have employed free-content promotion strategies to deal with the high uncertainty levels regarding digital content products. However, the diversity of digital content products and user heterogeneity in content preference may blur the impact of platform promotions across users and products. Therefore, free-content promotion strategies should be adapted to allocate marketing resources optimally and increase revenue. This study develops personalized free-content promotion strategies based on individual-level heterogeneous treatment effects and explores the causes of their heterogeneity, focusing on the moderating effect of user engagement-related variables. To this end, we utilize random field experimental data provided by a top Chinese e-book platform. We employ a framework that combines machine learning with econometric causal inference methods to estimate individual treatment effects and analyze their potential mechanisms. The analysis shows that, on average, free-content promotions lead to a significant increase in consumer payments. However, the higher the level of user engagement, the lower the payment lift caused by promotions, as more-engaged users are more strongly affected by the cannibalization effect of free-content promotion. This study introduces a novel causal research design to help platforms improve their marketing strategies.
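A minimal sketch of one common way to estimate individual-level treatment effects from experimental data by combining machine learning outcome models (a T-learner); the data file, variable names, and learner choice are illustrative assumptions rather than the study's actual design.

```python
# Hedged sketch: T-learner for individual treatment effects and an engagement split.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("experiment.csv")        # hypothetical: payment, treated flag, user features
X = df[["engagement", "tenure_days", "past_payments"]]
y = df["payment"]
t = df["treated"]

m1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])   # outcome model, treated group
m0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])   # outcome model, control group
df["ite"] = m1.predict(X) - m0.predict(X)                    # individual treatment effect

# heterogeneity check: does the payment lift shrink as engagement rises?
print(df.groupby(pd.qcut(df["engagement"], 4))["ite"].mean())
```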
Compared with traditional feedback control, predictive control can eliminate the lag of pose control and avoid the snake-like motion of shield machines. Therefore, a shield pose prediction model based on dynamic modeling was proposed. Firstly, the dynamic equations of the shield thrust system were established to clarify the relationship between force and movement of the shield machine. Secondly, an analytical model was proposed to predict the future multistep pose of the shield machine. Finally, a virtual prototype model was developed to simulate the dynamic behavior of the shield machine and validate the accuracy of the proposed pose prediction method. Results reveal that the proposed model can predict the shield pose with high accuracy, providing a decision basis for both manual and automatic control of shield pose.
Landslide susceptibility mapping is a crucial tool for disaster prevention and management. The performance of conventional data-driven models is greatly influenced by the quality of the sample data. The random selection of negative samples results in a lack of interpretability throughout the assessment process. To address this limitation and construct a high-quality negative sample database, this study introduces a physics-informed machine learning approach, combining the random forest model with Scoops 3D, to optimize the negative sample selection strategy and assess the landslide susceptibility of the study area. Scoops 3D is employed to determine the factor of safety value using Bishop's simplified method. Instead of conventional random selection, negative samples are extracted from areas with a high factor of safety. Subsequently, the results of the conventional random forest model and the physics-informed data-driven model are analyzed and discussed, focusing on model performance and prediction uncertainty. In comparison with conventional methods, the physics-informed model, set with a safety area threshold of 3, demonstrates a noteworthy improvement of 36.7% in the mean AUC value, coupled with reduced prediction uncertainty. It is evident that the choice of the safety area threshold affects both prediction uncertainty and model performance.
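A hedged sketch of the negative-sampling idea described above: non-landslide samples are drawn only from cells whose factor of safety (here assumed to come from a Scoops 3D run) exceeds a threshold before training a random forest; the file name, column names, and conditioning factors are illustrative.

```python
# Sketch: physics-informed negative sampling followed by random forest training.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

grid = pd.read_csv("grid_cells.csv")      # hypothetical: conditioning factors, fos, is_landslide
FOS_THRESHOLD = 3.0                        # the "safety area threshold" of the abstract

pos = grid[grid["is_landslide"] == 1]
neg = grid[(grid["is_landslide"] == 0) & (grid["fos"] > FOS_THRESHOLD)].sample(len(pos), random_state=0)
train = pd.concat([pos, neg])

features = ["slope", "aspect", "elevation", "rainfall", "distance_to_fault"]  # illustrative
rf = RandomForestClassifier(n_estimators=500, random_state=0)
auc = cross_val_score(rf, train[features], train["is_landslide"], cv=5, scoring="roc_auc").mean()
print(f"mean AUC with physics-informed negatives: {auc:.3f}")
```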
The gears of new energy vehicles are required to withstand higher rotational speeds and greater loads, which imposes higher precision requirements on gear manufacturing. However, machining process parameters can cause changes in cutting force and heat, thereby affecting gear machining precision. Therefore, this paper studies the effect of different process parameters on gear machining precision. A multi-objective optimization model is established to relate the cutting speed, feed rate, and cutting depth of the worm wheel gear grinding machine to tooth surface deviations, tooth profile deviations, and tooth lead deviations. The response surface method (RSM) is used for experimental design, and the corresponding experimental results and optimal process parameters are obtained. Subsequently, gray relational analysis-principal component analysis (GRA-PCA), particle swarm optimization (PSO), and genetic algorithm-particle swarm optimization (GA-PSO) methods are used to analyze the experimental results and obtain different optimal process parameters. The results show that the optimal process parameters obtained by the GRA-PCA, PSO, and GA-PSO methods all improve gear machining precision. Moreover, the machining precision obtained with GA-PSO is superior to that of the other methods.
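A minimal sketch of using particle swarm optimization to search the process-parameter space over a fitted response-surface surrogate; the quadratic surrogate, parameter bounds, and PSO coefficients below are illustrative stand-ins, not the paper's fitted RSM model.

```python
# Hedged sketch: PSO minimizing a surrogate "total tooth deviation" over process parameters.
import numpy as np

def surrogate_deviation(p):               # hypothetical RSM-style quadratic surrogate
    v, f, d = p                            # cutting speed, feed rate, cutting depth
    return 2.0 + 0.01*(v - 35)**2 + 4.0*(f - 0.6)**2 + 3.0*(d - 0.08)**2 - 0.02*v*f

lo, hi = np.array([20.0, 0.2, 0.02]), np.array([50.0, 1.0, 0.15])   # illustrative bounds
rng = np.random.default_rng(1)
pos = rng.uniform(lo, hi, (30, 3)); vel = np.zeros_like(pos)
pbest = pos.copy(); pbest_val = np.array([surrogate_deviation(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for _ in range(200):
    r1, r2 = rng.random((30, 3)), rng.random((30, 3))
    vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)   # inertia + cognitive + social
    pos = np.clip(pos + vel, lo, hi)
    val = np.array([surrogate_deviation(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()]
print("optimal (speed, feed, depth):", gbest)
```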
To solve multi-class fault diagnosis tasks, a decision tree support vector machine (DTSVM), which combines SVM and decision tree using the concept of dichotomy, is proposed. Since the classification performance of DTSVM highly depends on its structure, a genetic algorithm is introduced into the formation of the decision tree to cluster the multiple classes with maximum distance between the clustering centers of the two sub-classes, so that the most separable classes are separated at each node of the decision tree. Numerical simulations conducted on three datasets, with comparisons against the "one-against-all" and "one-against-one" schemes, demonstrate that the proposed method has better performance and higher generalization ability than the two conventional methods.
The support vector machine (SVM) is a novel machine learning method with the ability to approximate nonlinear functions with arbitrary accuracy. Setting parameters well is crucial for SVM learning results and generalization ability, yet there is no systematic, general method for parameter selection. In this article, SVM parameter selection for function approximation is regarded as a compound optimization problem, and a mutative scale chaos optimization algorithm is employed to search for optimal parameter values. The chaos optimization algorithm is an effective approach to global optimization, and the mutative scale variant improves search efficiency and accuracy. Several simulation examples show the sensitivity of the SVM parameters and demonstrate the superiority of the proposed method for nonlinear function approximation.
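A hedged sketch of chaos-based parameter search for a support vector regressor: candidate (C, gamma) pairs are generated with a logistic map, and the search box is shrunk once around the best point as a crude stand-in for the mutative-scale step; the data, ranges, and iteration counts are illustrative.

```python
# Sketch: logistic-map chaos search over SVR hyperparameters with one scale reduction.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1)); y = np.sinc(X).ravel() + rng.normal(0, 0.05, 200)

def score(C, gamma):
    return cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()

lo, hi = np.array([0.1, 0.01]), np.array([100.0, 10.0])   # (C, gamma) search box
best, best_score = None, -np.inf
z = np.array([0.31, 0.67])                                 # chaos variables in (0, 1)
for it in range(120):
    z = 4.0 * z * (1.0 - z)                                # logistic map, chaotic at mu = 4
    C, gamma = lo + z * (hi - lo)
    s = score(C, gamma)
    if s > best_score:
        best, best_score = (C, gamma), s
    if it == 60:                                           # "mutative scale": shrink the box
        lo = np.maximum(lo, np.array(best) * 0.5)
        hi = np.minimum(hi, np.array(best) * 2.0)
print("best (C, gamma):", best)
```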
Cloud data centers consume a great deal of power, leading to the problem of high energy consumption. To solve this problem, an energy-efficient virtual machine (VM) consolidation algorithm named PVDE (prediction-based VM deployment algorithm for energy efficiency) is presented. The proposed algorithm uses a linear weighted method to predict the load of each host and, based on the predicted host load, classifies the hosts in the data center into four classes for the purpose of VM migration. We also propose four types of VM selection algorithms for determining the potential VMs to be migrated. We performed extensive performance analysis of the proposed algorithms. Experimental results show that, in contrast to other energy-saving algorithms, the proposed algorithm significantly reduces energy consumption while maintaining low service level agreement (SLA) violations.
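A minimal sketch of the two ideas named above: a linear weighted forecast of host load from its recent history and a four-way host classification that could drive migration decisions. The weights, thresholds, and class actions are assumptions, not PVDE's actual values.

```python
# Hedged sketch: linear weighted load prediction and four-class host labelling.
import numpy as np

def predict_load(history, weights=(0.1, 0.2, 0.3, 0.4)):
    """Linear weighted prediction: more recent observations receive larger weights."""
    h = np.asarray(history[-len(weights):], dtype=float)
    return float(np.dot(h, weights) / sum(weights))

def classify_host(predicted):
    if predicted > 0.9:   return "overloaded"         # migrate some VMs away
    if predicted > 0.7:   return "moderately loaded"  # leave as is
    if predicted > 0.3:   return "lightly loaded"     # candidate migration target
    return "underloaded"                              # migrate all VMs away, switch host off

for hist in ([0.55, 0.70, 0.85, 0.95], [0.20, 0.15, 0.10, 0.08]):
    p = predict_load(hist)
    print(f"history={hist} -> predicted={p:.2f} -> {classify_host(p)}")
```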
A self-adaptive large neighborhood search method is presented for scheduling n jobs on m non-identical parallel machines with multiple time windows. Another feature of the problem lies in oversubscription: not all jobs can be scheduled within the specified scheduling horizons due to limited machine capacity. The objective is thus to maximize the overall profit of processed jobs while respecting machine constraints. A first-in-first-out heuristic is applied to find an initial solution, and then a large neighborhood search procedure is employed to relax and re-optimize cumbersome solutions. A machine learning mechanism is also introduced to converge on the most efficient neighborhoods for the problem. Extensive computational results are presented based on data from an application involving the daily observation scheduling of a fleet of Earth-observing satellites. The method rapidly solves most problem instances to optimality or near-optimality and shows robust performance in sensitivity analysis.
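A heavily simplified, hedged sketch of the destroy-and-repair loop at the core of large neighborhood search, applied to an oversubscribed parallel-machine toy instance; time windows and the self-adaptive neighborhood selection are omitted, and all numbers are illustrative (the paper's initial solution uses a first-in-first-out heuristic, replaced here by a greedy one).

```python
# Sketch: destroy-and-repair large neighborhood search maximizing scheduled profit.
import random

random.seed(0)
jobs = [{"id": i, "dur": random.randint(1, 5), "profit": random.randint(1, 10)} for i in range(40)]
CAP, MACHINES = 20, 3                                    # illustrative per-machine capacity

def repair(assign):
    """Greedily insert unscheduled jobs, by profit density, onto the least-loaded machine."""
    load = [sum(j["dur"] for j in assign[m]) for m in range(MACHINES)]
    scheduled = {j["id"] for m in range(MACHINES) for j in assign[m]}
    for j in sorted((j for j in jobs if j["id"] not in scheduled),
                    key=lambda j: -j["profit"] / j["dur"]):
        m = min(range(MACHINES), key=lambda k: load[k])
        if load[m] + j["dur"] <= CAP:
            assign[m].append(j); load[m] += j["dur"]
    return assign

def profit(assign):
    return sum(j["profit"] for m in range(MACHINES) for j in assign[m])

best = repair([[] for _ in range(MACHINES)])             # initial schedule
for _ in range(500):                                     # destroy-and-repair iterations
    cand = [[j for j in m if random.random() > 0.2] for m in best]   # drop ~20% of jobs
    cand = repair(cand)
    if profit(cand) > profit(best):
        best = cand
print("best total profit:", profit(best))
```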
Fault diagnosis technology plays an important role in industry because an emergency fault in a machine can bring heavy losses to people and companies. A fault diagnosis model based on multi-manifold learning and particle swarm optimization support vector machine (PSO-SVM) is studied. The model is applied to a rolling bearing experiment with three kinds of faults. The results verify that this model, based on multi-manifold learning and PSO-SVM, acquires fault-sensitive features effectively and with good accuracy.
The support vector machine has become an increasingly popular tool for machine learning tasks involving classification, regression, or novelty detection. Training a support vector machine requires the solution of a very large quadratic programming problem, and traditional optimization methods cannot be applied directly due to memory restrictions. Up to now, several approaches have existed for circumventing these shortcomings, and they work well. Here, another learning algorithm, particle swarm optimization, is introduced for training the SVM. The method is tested on UCI datasets.
A method for fast l-fold cross validation is proposed for the regularized extreme learning machine (RELM). The computational time of fast l-fold cross validation increases as the fold number decreases, which is the opposite of naive l-fold cross validation. Compared with naive l-fold cross validation, fast l-fold cross validation has an advantage in computational time, especially for large fold numbers such as l > 20. To corroborate the efficacy and feasibility of fast l-fold cross validation, experiments on five benchmark regression datasets are evaluated.
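A minimal sketch of a regularized extreme learning machine (random hidden layer, ridge-regularized output weights) evaluated with plain l-fold cross validation on synthetic data; the fast l-fold shortcut proposed in the paper, which reuses shared computations across folds, is not reproduced here.

```python
# Hedged sketch: RELM with naive 20-fold cross validation on synthetic regression data.
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (300, 2))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + rng.normal(0, 0.05, 300)

def relm_fit(X, y, n_hidden=50, lam=1e-2):
    W = rng.normal(size=(X.shape[1], n_hidden)); b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                               # random feature map
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)   # ridge solution
    return W, b, beta

def relm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

errors = []
for train, test in KFold(n_splits=20, shuffle=True, random_state=0).split(X):
    model = relm_fit(X[train], y[train])
    errors.append(np.mean((relm_predict(model, X[test]) - y[test]) ** 2))
print("20-fold CV MSE:", np.mean(errors))
```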