Machine picking in cotton is an emerging practice in India, adopted to address labour shortages and rising production costs. Cotton production has been declining in recent years; however, the high density planting system (HDPS) offers a viable method to enhance productivity by increasing plant populations per unit area, optimizing resource utilization, and facilitating machine picking. Cotton is an indeterminate plant that produces excessive vegetative growth under favorable soil fertility and moisture conditions, which poses challenges for efficient machine picking. To address this issue, the application of plant growth retardants (PGRs) is essential for controlling canopy architecture. PGRs reduce internode elongation, promote regulated branching, and increase plant compactness, making cotton plants better suited for machine picking. PGR application also optimizes the distribution of photosynthates between vegetative and reproductive growth, resulting in higher yields and improved fibre quality. The integration of HDPS and PGR applications produces an optimal plant architecture for improving machine-picking efficiency. However, the success of this integration depends on several factors, including cotton variety, environmental conditions, and geographical variations. These approaches not only address yield stagnation and labour shortages but also help to establish more effective and sustainable cotton farming practices, resulting in higher cotton productivity.
The graded density impactor (GDI) dynamic loading technique is crucial for acquiring the dynamic physical property parameters of materials used in weapons. The accuracy and timeliness of GDI structural design are key to achieving controllable stress-strain rate loading. In this study, we have, for the first time, combined one-dimensional fluid computational software with machine learning methods. We first elucidated the mechanisms by which GDI structures control stress and strain rates. Subsequently, we constructed a machine learning model to create a structure-property response surface. The results show that altering the loading velocity and interlayer thickness has a pronounced regulatory effect on stress and strain rates. In contrast, the impedance distribution index and target thickness have less significant effects on stress regulation, although there is a matching relationship between target thickness and interlayer thickness. Compared with traditional design methods, the machine learning approach offers a 10^(4)–10^(5) times increase in efficiency and the potential to achieve a global optimum, holding promise for guiding the design of GDI.
3-Nitro-1,2,4-triazol-5-one (NTO) is a typical high-energy, low-sensitivity explosive, and accurate concentration monitoring is critical for crystallization process control. In this study, a high-precision quantitative analytical model for NTO concentration in ethanol solutions was developed by integrating real-time ATR-FTIR spectroscopy with chemometric and machine learning techniques. Dynamic spectral data were obtained by designing multi-concentration gradient heating-cooling cycle experiments, abnormal samples were eliminated using the isolation forest algorithm, and the effects of various preprocessing methods on model performance were systematically evaluated. The results show that partial least squares regression (PLSR) exhibits superior generalization ability compared with other models. Vibrational bands corresponding to C=O and –NO_(2) were identified as key predictors for concentration estimation. This work provides an efficient and reliable solution for real-time concentration monitoring during NTO crystallization and holds significant potential for process analytical applications in energetic material manufacturing.
Background The geo-traceability of cotton is crucial for ensuring the quality and integrity of cotton brands. However, effective methods for achieving this traceability are currently lacking. This study investigates the potential of explainable machine learning for the geo-traceability of raw cotton. Results The findings indicate that principal component analysis (PCA) exhibits limited effectiveness in tracing cotton origins. In contrast, partial least squares discriminant analysis (PLS-DA) demonstrates superior classification performance, identifying seven discriminating variables: Na, Mn, Ba, Rb, Al, As, and Pb. The use of decision tree (DT), support vector machine (SVM), and random forest (RF) models for origin discrimination yielded accuracies of 90%, 87%, and 97%, respectively. Notably, the light gradient boosting machine (LightGBM) model achieved perfect performance metrics, with accuracy, precision, and recall all reaching 100% on the test set. The output of the LightGBM model was further evaluated using the SHapley Additive exPlanations (SHAP) technique, which highlighted differences in the elemental composition of raw cotton from various countries. Specifically, the elements Pb, Ni, Na, Al, As, Ba, and Rb significantly influenced the model's predictions. Conclusion These findings suggest that explainable machine learning techniques can provide insights into the complex relationships between geographic information and raw cotton. Consequently, these methodologies enhance the precision and reliability of geographic traceability for raw cotton.
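As a sketch of the classification step, a random forest on synthetic elemental profiles can stand in for the models compared above. The element list follows the abstract, but the origin means and noise are invented, and built-in impurity importances are used as a rough proxy for the SHAP analysis:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
elements = ["Na", "Mn", "Ba", "Rb", "Al", "As", "Pb"]  # discriminators from the abstract

# Invent three "origins", each with its own mean elemental profile,
# and draw 60 noisy samples per origin.
means = rng.uniform(1.0, 10.0, size=(3, len(elements)))
X = np.vstack([m + 0.5 * rng.standard_normal((60, len(elements))) for m in means])
y = np.repeat([0, 1, 2], 60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

# Rank elements by impurity-based importance (a SHAP analysis would give
# per-sample attributions instead).
ranked = sorted(zip(elements, clf.feature_importances_), key=lambda t: -t[1])
```

With real data, class imbalance between countries and a held-out geographic test set would matter far more than they do on this toy example.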
Background Plant tissue culture has emerged as a tool for improving cotton propagation and genetics, but the recalcitrant nature of cotton makes it difficult to develop in vitro regeneration. Cotton's recalcitrance is influenced by genotype, explant type, and environmental conditions. To overcome these issues, this study uses different machine learning-based predictive models employing multiple input factors. Cotyledonary node explants of two commercial cotton cultivars (STN-468 and GSN-12) were isolated from 7–8-day-old seedlings and preconditioned with 5, 10, and 20 mg·L^(-1) kinetin (KIN) for 10 days. Thereafter, explants were postconditioned on full Murashige and Skoog (MS), 1/2 MS, 1/4 MS, and full MS + 0.05 mg·L^(-1) KIN, and cultured in a growth room illuminated with a combination of red and blue light-emitting diodes (LEDs). Statistical analysis (analysis of variance, regression analysis) was employed to assess the impact of different treatments on shoot regeneration, with artificial intelligence (AI) models used to confirm the findings. Results GSN-12 exhibited superior shoot regeneration potential compared with STN-468, with an average of 4.99 shoots per explant versus 3.97. Optimal results were achieved with 5 mg·L^(-1) KIN preconditioning, 1/4 MS postconditioning, and 80% red LED, with a maximum shoot count of 7.75 for GSN-12 under these conditions; STN-468 reached 6.00 shoots under 10 mg·L^(-1) KIN preconditioning, MS with 0.05 mg·L^(-1) KIN postconditioning, and 75.0% red LED. Rooting was successfully achieved with naphthalene acetic acid and activated charcoal. Additionally, three powerful AI-based models, namely extreme gradient boosting (XGBoost), random forest (RF), and the artificial neural network-based multilayer perceptron (MLP) regression model, validated the findings. Conclusion GSN-12 outperformed STN-468, with optimal results from 5 mg·L^(-1) KIN + 1/4 MS + 80% red LED. Applying machine learning-based prediction models to optimize cotton tissue culture protocols for shoot regeneration helps improve cotton regeneration efficiency.
Blast-induced ground vibration, quantified by peak particle velocity (PPV), is a crucial factor in mitigating environmental and structural risks in mining and geotechnical engineering. Accurate PPV prediction facilitates safer and more sustainable blasting operations by minimizing adverse impacts and ensuring regulatory compliance. This study presents an advanced predictive framework integrating CatBoost (CB) with nature-inspired optimization algorithms, including the Bat Algorithm (BAT), Sparrow Search Algorithm (SSA), Butterfly Optimization Algorithm (BOA), and Grasshopper Optimization Algorithm (GOA). A comprehensive dataset from the Sarcheshmeh Copper Mine in Iran was utilized to develop and evaluate these models using key performance metrics such as the Index of Agreement (IoA), Nash-Sutcliffe Efficiency (NSE), and the coefficient of determination (R^(2)). The hybrid CB-BOA model outperformed the other approaches, achieving the highest accuracy (R^(2)=0.989) and the lowest prediction errors. SHAP analysis identified distance (Di) as the most influential variable affecting PPV, while uncertainty analysis confirmed CB-BOA as the most reliable model, featuring the narrowest prediction interval. These findings highlight the effectiveness of hybrid machine learning models in refining PPV predictions, contributing to improved blast design strategies, enhanced structural safety, and reduced environmental impacts in mining and geotechnical engineering.
Driven by rapid technological advancements and economic growth, mineral extraction and metal refining have increased dramatically, generating huge volumes of tailings and mine waste (TMWs). Investigating the morphological fractions of heavy metals and metalloids (HMMs) in TMWs is key to evaluating their leaching potential into the environment; however, traditional experiments are time-consuming and labor-intensive. In this study, 10 machine learning (ML) algorithms were used and compared for rapidly predicting the morphological fractions of HMMs in TMWs. A dataset comprising 2376 data points was used, with mineral composition, elemental properties, and total concentration as inputs and the concentration of each morphological fraction as output. After grid search optimization, the extra trees model performed best, achieving coefficients of determination (R^(2)) of 0.946 and 0.942 on the validation and test sets, respectively. Electronegativity was found to have the greatest impact on the morphological fraction. The models' performance was further enhanced by applying an ensemble method to the top three optimal ML models: gradient boosting decision tree, extra trees, and categorical boosting. Overall, the proposed framework can accurately predict the concentrations of different morphological fractions of HMMs in TMWs. This approach can minimize detection time and aid in the safe management and recovery of TMWs.
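The regression setup above (composition and elemental properties in, fraction concentration out, extra trees as the learner) can be sketched on invented data; the feature list, coefficients, and dominance of electronegativity in the synthetic target are assumptions chosen to echo the abstract, not the authors' dataset:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Hypothetical inputs: [electronegativity, total concentration, coded mineral].
X = np.column_stack([
    rng.uniform(1.5, 2.5, 500),             # electronegativity (Pauling scale)
    rng.uniform(10.0, 1000.0, 500),         # total concentration, mg/kg
    rng.integers(0, 5, 500).astype(float),  # coded mineral composition
])
# Invented target: fraction concentration dominated by electronegativity.
y = 100.0 * X[:, 0] + 0.01 * X[:, 1] + rng.standard_normal(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = ExtraTreesRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)
importances = model.feature_importances_   # electronegativity should dominate
```

A stacking or averaging ensemble over the top models, as in the paper, would wrap several such regressors rather than a single one.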
Background Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its use in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is greatly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various techniques available, crop models are the most effective and widely used tools for predicting yields. Results This study compares statistical and machine learning models to assess their ability to predict cotton yield across major producing districts of Karnataka, India, utilizing a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. The artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2) for cotton. The model evaluation metrics, such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF), were also within the acceptance limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the yield-predicted districts. These differences highlighted the differential interactions of weather factors in each district for cotton yield formation, underscoring the individual response of each weather factor under different soils and management conditions over the major cotton growing districts of Karnataka. Conclusions Compared with statistical models, machine learning models such as ANNs showed higher efficiency in forecasting cotton yield due to their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting in rainfed conditions and for studying the relative impacts of weather factors on yield. Thus, the study provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
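A hedged sketch of this kind of ANN yield model: an MLP regressor on invented weather features, with a relative humidity × temperature interaction built into the synthetic target to mirror the abstract's finding. scikit-learn stands in for the authors' model, and every coefficient below is made up:

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 400

# Invented district-level weather features.
rh = rng.uniform(40, 95, n)       # morning relative humidity, %
tmax = rng.uniform(28, 40, n)     # maximum temperature, degC
tmin = rng.uniform(15, 25, n)     # minimum temperature, degC
rain = rng.uniform(0, 30, n)      # rainfall, mm
X = np.column_stack([rh, tmax, tmin, rain])

# Synthetic "yield" with an RH x Tmax interaction (arbitrary units).
y = 0.5 * rh - 0.3 * tmax * (rh / 100.0) + 0.2 * rain + rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = TransformedTargetRegressor(
    regressor=make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
    ),
    transformer=StandardScaler(),   # scale the target too, for stable training
)
ann.fit(X_tr, y_tr)
r2 = ann.score(X_te, y_te)
```

Because the network sees raw features, it can pick up the interaction without it being hand-coded, which is the advantage the conclusion attributes to ANNs over the statistical baselines.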
A typical Whipple shield consists of double-layered plates with a certain gap. Space debris impacts the outer plate and is broken into a debris cloud (shattered, molten, vaporized) with dispersed energy and momentum, which reduces the risk of penetrating the bulkhead. In the realm of hypervelocity impact, strain rate (>10^(5) s^(-1)) effects are negligible, and fluid dynamics is employed to describe the impact process. Efficient numerical tools for precisely predicting the damage degree can greatly accelerate the design and optimization of advanced protective structures. Current hypervelocity impact research primarily focuses on the interaction between the projectile and the front plate and on the movement of the debris cloud. However, the damage mechanism of debris cloud impacts on the rear plate, the critical threat component, remains underexplored owing to complex multi-physics processes and prohibitive computational costs. Existing approaches, ranging from semi-empirical equations to machine learning-based ballistic limit prediction methods, are constrained to binary penetration classification. Moreover, uneven data from experiments and simulations render these methods ineffective when the projectile has an irregular shape and a complicated flight attitude. Therefore, it is urgent to develop a new method for predicting rear plate damage, which can help to gain a deeper understanding of the damage mechanism. In this study, a machine learning (ML) method is developed to predict the damage distribution in the rear plate. Based on the unit velocity space, discretized information on the debris cloud and rear plate damage from a small number of simulation cases is used as input data for training the ML models, while the generalization ability for damage distribution prediction is tested on other simulation cases with different attack angles. The results demonstrate that the training and prediction accuracies using the Random Forest (RF) algorithm significantly surpass those using Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs). The RF-based model effectively identifies damage features in sparsely distributed debris clouds and the cumulative effect. This study establishes an expandable new dataset that accommodates additional parameters to improve prediction accuracy. The results demonstrate the model's ability to overcome data imbalance limitations through debris cloud features, enabling rapid and accurate rear plate damage prediction across wider scenarios with minimal data requirements.
To solve multi-class fault diagnosis tasks, the decision tree support vector machine (DTSVM), which combines SVM and decision tree using the concept of dichotomy, is proposed. Since the classification performance of DTSVM highly depends on its structure, a genetic algorithm is introduced into the formation of the decision tree to cluster the classes so that the distance between the clustering centers of the two sub-classes is maximized, ensuring that the most separable classes are separated at each node of the decision tree. Numerical simulations conducted on three datasets, compared with "one-against-all" and "one-against-one", demonstrate that the proposed method has better performance and higher generalization ability than the two conventional methods.
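The dichotomy idea (split the set of classes into the two most separable super-classes at each node, then train a binary SVM) can be sketched for a single root node on synthetic data. Here k-means on the class centroids stands in for the paper's genetic-algorithm grouping:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Four well-separated classes as a stand-in for fault types.
X, y = make_blobs(n_samples=400, centers=4, cluster_std=1.0, random_state=0)

# Root node of the dichotomy: group the class centroids into two
# super-classes (the paper optimizes this grouping with a genetic algorithm).
centroids = np.array([X[y == c].mean(axis=0) for c in range(4)])
grouping = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(centroids)
super_y = grouping[y]          # relabel each sample by its class's group

node_svm = SVC(kernel="rbf").fit(X, super_y)  # binary SVM at this node
acc = node_svm.score(X, super_y)
# Each super-class would then be split recursively until single classes remain,
# giving the full DTSVM tree.
```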
The support vector machine (SVM) is a novel machine learning method with the ability to approximate nonlinear functions with arbitrary accuracy. Setting parameters well is crucial for SVM learning results and generalization ability, and there is currently no systematic, general method for parameter selection. In this article, SVM parameter selection for function approximation is regarded as a compound optimization problem, and a mutative scale chaos optimization algorithm is employed to search for optimal parameter values. The chaos optimization algorithm is an effective way to find a global optimum, and the mutative scale variant improves search efficiency and accuracy. Several simulation examples show the sensitivity of the SVM parameters and demonstrate the superiority of the proposed method for nonlinear function approximation.
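A minimal sketch of mutative-scale chaos search over SVM parameters: the logistic map generates candidate points, each scored by cross validation, and the search interval contracts around the best point after every stage. The bounds, seeds, stage counts, and toy target are all invented for illustration:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(120)   # toy target function

def cv_score(log_c, log_g):
    """3-fold CV score of an SVR at the given log10(C), log10(gamma)."""
    svr = SVR(C=10.0 ** log_c, gamma=10.0 ** log_g, epsilon=0.01)
    return cross_val_score(svr, X, y, cv=3).mean()

# Chaos search: the logistic map x <- 4x(1-x) generates the candidate
# sequence; after each stage the interval contracts around the incumbent
# best point ("mutative scale").
lo = np.array([-2.0, -3.0])       # log10 lower bounds for (C, gamma)
hi = np.array([3.0, 1.0])         # log10 upper bounds
x = np.array([0.31, 0.57])        # chaos seeds in (0, 1)
best_p, best_s = None, -np.inf
for stage in range(3):
    for _ in range(15):
        x = 4.0 * x * (1.0 - x)              # chaotic iterate per dimension
        p = lo + x * (hi - lo)               # map into current bounds
        s = cv_score(*p)
        if s > best_s:
            best_p, best_s = p, s
    span = 0.5 * (hi - lo)                   # shrink the search interval
    lo, hi = best_p - span / 2.0, best_p + span / 2.0
```

The ergodicity of the chaotic sequence plays the role that random sampling plays in random search, while the shrinking interval adds local refinement.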
Cloud data centers consume vast amounts of power, leading to the problem of high energy consumption. To solve this problem, an energy-efficient virtual machine (VM) consolidation algorithm named PVDE (prediction-based VM deployment algorithm for energy efficiency) is presented. The proposed algorithm uses a linearly weighted method to predict the load of a host and classifies the hosts in the data center, based on the predicted host load, into four classes for the purpose of VM migration. We also propose four types of VM selection algorithms to determine potential VMs to be migrated. We performed extensive performance analysis of the proposed algorithms. Experimental results show that, in contrast to other energy-saving algorithms, the proposed algorithm significantly reduces energy consumption and maintains low service level agreement (SLA) violations.
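The prediction-and-classification step can be sketched in a few lines: a linearly weighted average of recent utilization samples (recent samples weigh more), then a four-way host classification. The weights and thresholds below are invented, not the paper's values:

```python
# Linearly weighted load prediction followed by four-way host classification.
def predict_load(history, weights=(0.1, 0.2, 0.3, 0.4)):
    """Weighted average of the most recent samples; later samples weigh more."""
    recent = history[-len(weights):]
    return sum(w * u for w, u in zip(weights, recent)) / sum(weights)

def classify_host(load):
    if load > 0.9:
        return "overloaded"       # source host: migrate VMs away
    if load > 0.7:
        return "normal-high"      # leave as-is; accept no new VMs
    if load > 0.3:
        return "normal-low"       # candidate migration target
    return "underloaded"          # drain remaining VMs, then power down

samples = [0.35, 0.50, 0.80, 0.95]    # recent CPU utilization of one host
pred = predict_load(samples)          # -> 0.755
state = classify_host(pred)           # -> "normal-high"
```

The VM selection policies the abstract mentions would then pick which VMs leave an "overloaded" host and which "normal-low" host receives them.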
A self-adaptive large neighborhood search method for scheduling n jobs on m non-identical parallel machines with multiple time windows is presented. Another feature of the problem is oversubscription: not all jobs can be scheduled within the specified scheduling horizons due to limited machine capacity. The objective is thus to maximize the overall profit of processed jobs while respecting machine constraints. A first-in-first-out heuristic is applied to find an initial solution, and then a large neighborhood search procedure is employed to relax and re-optimize cumbersome solutions. A machine learning mechanism is also introduced to converge on the most efficient neighborhoods for the problem. Extensive computational results are presented based on data from an application involving the daily observation scheduling of a fleet of earth observing satellites. The method rapidly solves most problem instances to optimality or near-optimality and shows robust performance in sensitivity analysis.
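The relax-and-reoptimize loop can be illustrated on a toy oversubscribed instance: greedy construction, then repeated removal of a random slice of the incumbent followed by greedy re-packing, keeping improvements. This is a bare ruin-and-recreate sketch with invented data; it omits the paper's time windows and self-adaptive neighborhood selection:

```python
import random

random.seed(0)

# Invented oversubscribed instance: more total work than machine capacity.
jobs = [{"id": i, "dur": random.randint(1, 5), "profit": random.randint(1, 10)}
        for i in range(30)]
MACHINES, CAP = 3, 20          # three machines, 20 time units each

def repair(banned):
    """Greedy insertion by profit density, skipping banned job ids."""
    load = [0] * MACHINES
    plan = []
    for j in sorted(jobs, key=lambda j: -j["profit"] / j["dur"]):
        if j["id"] in banned:
            continue
        for m in range(MACHINES):
            if load[m] + j["dur"] <= CAP:
                load[m] += j["dur"]
                plan.append(j["id"])
                break
    return plan

def profit(plan):
    return sum(j["profit"] for j in jobs if j["id"] in plan)

best = repair(set())                        # initial greedy solution
for _ in range(200):                        # destroy / repair / accept-if-better
    banned = set(random.sample(best, k=min(5, len(best))))
    cand = repair(banned)
    if profit(cand) > profit(best):
        best = cand
```

The self-adaptive part of the paper would track which destroy operators yield improvements and sample them more often, instead of the single random-removal operator used here.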
Fault diagnosis technology plays an important role in industry because a sudden machine fault can cause heavy losses for people and companies. A fault diagnosis model based on multi-manifold learning and particle swarm optimization support vector machine (PSO-SVM) is studied. The model is applied to a rolling bearing experiment involving three kinds of faults. The results verify that the model based on multi-manifold learning and PSO-SVM acquires fault-sensitive features with good accuracy.
The support vector machine has become an increasingly popular tool for machine learning tasks involving classification, regression, or novelty detection. Training a support vector machine requires the solution of a very large quadratic programming problem, and traditional optimization methods cannot be directly applied due to memory restrictions. Several approaches exist for circumventing these shortcomings and work well. Here, another learning algorithm, particle swarm optimization, is introduced for training the SVM. The method is tested on UCI datasets.
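The particle swarm update that such training schemes rely on can be shown on a stand-in objective; the sphere function below replaces the SVM training objective, and the parameters w, c1, c2 are common textbook values rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(9)

def sphere(p):
    """Stand-in objective; the paper would score SVM training quality here."""
    return float(np.sum(p ** 2))

n_particles, dim = 20, 4
x = rng.uniform(-5.0, 5.0, size=(n_particles, dim))   # positions
v = np.zeros_like(x)                                  # velocities
pbest = x.copy()                                      # per-particle bests
pbest_f = np.array([sphere(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()              # global best

w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
for _ in range(100):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    # Canonical update: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([sphere(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

best_val = float(pbest_f.min())
```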
A method for fast l-fold cross validation is proposed for the regularized extreme learning machine (RELM). The computational time of fast l-fold cross validation increases as the fold number decreases, which is the opposite of naive l-fold cross validation. Compared with naive l-fold cross validation, fast l-fold cross validation therefore has an advantage in computational time, especially for large fold numbers such as l > 20. To corroborate the efficacy and feasibility of fast l-fold cross validation, experiments on five benchmark regression data sets are evaluated.
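For reference, the naive l-fold baseline the paper accelerates can be sketched for a regularized ELM: a fixed random hidden layer and ridge-solved output weights, retrained from scratch in every fold. Data, layer sizes, and the regularization constant are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.05 * rng.standard_normal(200)

W = rng.standard_normal((3, 40))   # fixed random input weights
b = rng.standard_normal(40)        # fixed random biases

def hidden(Xs):
    return np.tanh(Xs @ W + b)     # ELM hidden-layer output matrix H

def fit_output(H, t, lam=1e-3):
    # RELM ridge solution: beta = (H^T H + lam*I)^(-1) H^T t
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ t)

def naive_l_fold_mse(l=10):
    """Naive l-fold CV: re-solve the output weights from scratch per fold."""
    folds = np.array_split(rng.permutation(len(X)), l)
    errs = []
    for k in range(l):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(l) if j != k])
        beta = fit_output(hidden(X[train]), y[train])
        errs.append(np.mean((hidden(X[test]) @ beta - y[test]) ** 2))
    return float(np.mean(errs))

mse = naive_l_fold_mse(10)
```

The cost of this naive loop grows with l because the ridge system is re-solved l times; the paper's fast variant reuses shared matrix computations across folds, inverting that trend.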
Supplier selection in supply chain management (SCM) has attracted considerable research interest in recent years. Recent literature shows that neural networks achieve better performance than traditional statistical methods. However, neural networks have inherent drawbacks, such as convergence to local optima, poor generalization, and uncontrolled convergence. A relatively new machine learning technique, the support vector machine (SVM), which overcomes these drawbacks, is introduced to provide a model with better explanatory power for selecting ideal supplier partners. Meanwhile, in practice, supplier samples are often insufficient, and SVMs are well suited to training and testing on small samples. The prediction accuracies of the BPNN and SVM methods are compared to choose appropriate suppliers. Practical examples illustrate that the SVM method is superior to BPNN.
Funding: supported by the Guangdong Major Project of Basic and Applied Basic Research (Grant No. 2021B0301030001), the National Key Research and Development Program of China (Grant No. 2021YFB3802300), and the Foundation of the National Key Laboratory of Shock Wave and Detonation Physics (Grant No. JCKYS2022212004).
Funding: supported by the Aeronautical Science Foundation of China (Grant No. 20230018072011).
Funding: supported by the Agricultural Science and Technology Innovation Program of the Chinese Academy of Agricultural Sciences.
Funding: Supported by the Deanship of Scientific Research at Northern Border University, Arar, KSA, through project number "NBUFFMRA-2025-2461-09".
Abstract: Blast-induced ground vibration, quantified by peak particle velocity (PPV), is a crucial factor in mitigating environmental and structural risks in mining and geotechnical engineering. Accurate PPV prediction facilitates safer and more sustainable blasting operations by minimizing adverse impacts and ensuring regulatory compliance. This study presents an advanced predictive framework integrating CatBoost (CB) with nature-inspired optimization algorithms, including the Bat Algorithm (BAT), Sparrow Search Algorithm (SSA), Butterfly Optimization Algorithm (BOA), and Grasshopper Optimization Algorithm (GOA). A comprehensive dataset from the Sarcheshmeh Copper Mine in Iran was used to develop and evaluate these models with key performance metrics such as the Index of Agreement (IoA), Nash-Sutcliffe Efficiency (NSE), and the coefficient of determination (R²). The hybrid CB-BOA model outperformed the other approaches, achieving the highest accuracy (R² = 0.989) and the lowest prediction errors. SHAP analysis identified distance (Di) as the most influential variable affecting PPV, while uncertainty analysis confirmed CB-BOA as the most reliable model, featuring the narrowest prediction interval. These findings highlight the effectiveness of hybrid machine learning models in refining PPV predictions, contributing to improved blast design strategies, enhanced structural safety, and reduced environmental impacts in mining and geotechnical engineering.
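The three evaluation metrics named in the abstract above (Index of Agreement, Nash-Sutcliffe Efficiency, coefficient of determination) have standard definitions that can be written down directly in plain Python. The PPV values below are made up purely to exercise the formulas; they are not data from the study.

```python
# Standard definitions of IoA, NSE, and R²; observed/predicted values are
# synthetic illustration data, not values from the Sarcheshmeh dataset.
def ioa(obs, pred):
    """Willmott's Index of Agreement, in [0, 1]; 1 is perfect agreement."""
    m = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - m) + abs(o - m)) ** 2 for o, p in zip(obs, pred))
    return 1 - num / den

def nse(obs, pred):
    """Nash-Sutcliffe Efficiency; 1 is a perfect fit, 0 matches the mean."""
    m = sum(obs) / len(obs)
    num = sum((o - p) ** 2 for o, p in zip(obs, pred))
    den = sum((o - m) ** 2 for o in obs)
    return 1 - num / den

def r2(obs, pred):
    """Squared Pearson correlation between observed and predicted values."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    vo = sum((o - mo) ** 2 for o in obs)
    vp = sum((p - mp) ** 2 for p in pred)
    return cov * cov / (vo * vp)

observed  = [2.1, 3.4, 1.8, 4.0, 2.9]   # hypothetical PPV values (mm/s)
predicted = [2.0, 3.6, 1.7, 3.9, 3.1]
print(round(ioa(observed, predicted), 3), round(nse(observed, predicted), 3))
```

All three metrics approach 1 as predictions improve, which is why the paper reports them side by side when ranking the hybrid models.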
Funding: Project (2024JJ2074) supported by the Natural Science Foundation of Hunan Province, China; Project (22376221) supported by the National Natural Science Foundation of China; Project (2023QNRC001) supported by the Young Elite Scientists Sponsorship Program by CAST, China.
Abstract: Driven by rapid technological advancements and economic growth, mineral extraction and metal refining have increased dramatically, generating huge volumes of tailings and mine wastes (TMWs). Investigating the morphological fractions of heavy metals and metalloids (HMMs) in TMWs is key to evaluating their leaching potential into the environment; however, traditional experiments are time-consuming and labor-intensive. In this study, 10 machine learning (ML) algorithms were used and compared for rapidly predicting the morphological fractions of HMMs in TMWs. A dataset comprising 2376 data points was used, with mineral composition, elemental properties, and total concentration as inputs and the concentration of each morphological fraction as output. After grid search optimization, the extra trees model performed best, achieving coefficients of determination (R²) of 0.946 and 0.942 on the validation and test sets, respectively. Electronegativity was found to have the greatest impact on the morphological fraction. The models' performance was further enhanced by applying an ensemble method to the top three ML models: gradient boosting decision tree, extra trees, and categorical boosting. Overall, the proposed framework can accurately predict the concentrations of different morphological fractions of HMMs in TMWs. This approach can minimize detection time and aid in the safe management and recovery of TMWs.
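The final ensemble step described above, combining the top three models, can be sketched in its simplest form as prediction averaging. The exact combination rule used in the study is not specified here, and the model outputs below are stand-in numbers, not values from the paper.

```python
# Minimal sketch of a prediction-averaging ensemble over three models.
# All prediction values are hypothetical.
def ensemble_average(*model_preds):
    """Average aligned prediction lists from several models."""
    return [sum(vals) / len(vals) for vals in zip(*model_preds)]

gbdt_pred     = [0.40, 0.55, 0.10]   # hypothetical fraction concentrations
extra_pred    = [0.42, 0.50, 0.12]
catboost_pred = [0.38, 0.57, 0.11]

ens = ensemble_average(gbdt_pred, extra_pred, catboost_pred)
print([round(e, 2) for e in ens])
```

Averaging reduces the variance of the individual learners, which is one plausible reason the ensemble edged out the single extra trees model.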
Funding: Funded through the India Meteorological Department, New Delhi, India, under the Forecasting Agricultural output using Space, Agrometeorology and Land based observations (FASAL) project, fund number No. ASC/FASAL/KT-11/01/HQ-2010.
Abstract: Background Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its use in multiple industries, such as the textile, medicine, and automobile industries, it has considerable commercial importance. The crop's performance is greatly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various techniques available, crop models are the most effective and widely used tools for predicting yields. Results This study compares statistical and machine learning models to assess their ability to predict cotton yield across the major producing districts of Karnataka, India, utilizing a long-term dataset spanning from 1990 to 2023 that includes yield and weather factors. Artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2) for cotton. Model evaluation metrics such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF) were also within acceptable limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts with predicted yields. These differences highlight the differential interactions of weather factors in cotton yield formation in each district, reflecting the individual response of each weather factor under different soil and management conditions across the major cotton-growing districts of Karnataka. Conclusions Compared with statistical models, machine learning models such as ANNs proved more efficient in forecasting cotton yield owing to their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting under rainfed conditions and for studying the relative impacts of weather factors on yield. The study thus provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
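The acceptance criteria named in the abstract above (RMSE, nRMSE, EF, and the ±10% yield deviation band) can be written down directly. One assumption here: nRMSE is expressed as a percentage of the observed mean, a common convention; the yield figures are invented for illustration.

```python
# Sketch of the forecast-evaluation metrics; yields are synthetic.
import math

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def nrmse(obs, pred):
    """RMSE as a percentage of the observed mean (assumed convention)."""
    return 100 * rmse(obs, pred) / (sum(obs) / len(obs))

def ef(obs, pred):
    """Modelling efficiency: 1 is a perfect fit, 0 matches the mean."""
    m = sum(obs) / len(obs)
    return 1 - sum((o - p) ** 2 for o, p in zip(obs, pred)) / \
               sum((o - m) ** 2 for o in obs)

def within_deviation(obs, pred, pct=10.0):
    """True if every forecast deviates from the observation by <= pct %."""
    return all(abs(p - o) / o * 100 <= pct for o, p in zip(obs, pred))

observed  = [410, 520, 380, 460]   # hypothetical lint yield, kg/ha
predicted = [400, 545, 365, 470]
print(round(nrmse(observed, predicted), 2), within_deviation(observed, predicted))
```

A forecast passing all three checks at once is what the study treats as "acceptable" district-level performance.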
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12432018 and 12372346) and the Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 12221002).
Abstract: A typical Whipple shield consists of double-layered plates separated by a certain gap. Space debris impacts the outer plate and is broken into a debris cloud (shattered, molten, or vaporized material) with dispersed energy and momentum, which reduces the risk of penetrating the bulkhead. In the realm of hypervelocity impact, strain rate (>10⁵ s⁻¹) effects are negligible, and fluid dynamics is employed to describe the impact process. Efficient numerical tools for precisely predicting the damage degree can greatly accelerate the design and optimization of advanced protective structures. Current hypervelocity impact research primarily focuses on the interaction between the projectile and the front plate and on the movement of the debris cloud. However, the damage mechanism of debris cloud impacts on the rear plate, the critical threat component, remains underexplored owing to complex multi-physics processes and prohibitive computational costs. Existing approaches, ranging from semi-empirical equations to a machine learning-based ballistic limit prediction method, are constrained to binary penetration classification. Moreover, uneven data from experiments and simulations render these methods ineffective when the projectile has an irregular shape and a complicated flight attitude. It is therefore urgent to develop a new method for predicting rear plate damage, which can help deepen understanding of the damage mechanism. In this study, a machine learning (ML) method is developed to predict the damage distribution in the rear plate. Based on the unit velocity space, discretized information on the debris cloud and rear plate damage from a small number of simulation cases is used as input data for training the ML models, while the generalization ability for damage distribution prediction is tested on other simulation cases with different attack angles. The results demonstrate that the training and prediction accuracies using the Random Forest (RF) algorithm significantly surpass those using Artificial Neural Networks (ANNs) and the Support Vector Machine (SVM). The RF-based model effectively identifies damage features of sparsely distributed debris clouds and the cumulative effect. This study establishes an expandable dataset that accommodates additional parameters to improve prediction accuracy. The results demonstrate the model's ability to overcome data imbalance limitations through debris cloud features, enabling rapid and accurate rear plate damage prediction across wider scenarios with minimal data requirements.
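The discretization idea above, turning a debris cloud and the resulting rear-plate damage into grid-based features, can be hinted at with a toy binning routine. The paper's actual unit-velocity-space encoding is more involved; this sketch, with entirely synthetic fragment coordinates, only shows how scattered impact points become a per-cell damage map that a per-cell regressor (e.g., a random forest) could be trained on.

```python
# Toy sketch: bin fragment impact points into a grid over the rear plate.
# Coordinates and plate dimensions are synthetic illustration values.
def damage_grid(points, size, cells):
    """Count fragment impacts per cell on a size x size plate."""
    grid = [[0] * cells for _ in range(cells)]
    step = size / cells
    for x, y in points:
        i = min(int(x / step), cells - 1)   # clamp edge hits into last cell
        j = min(int(y / step), cells - 1)
        grid[i][j] += 1
    return grid

fragments = [(1.0, 1.2), (1.1, 1.3), (4.8, 4.9), (2.5, 2.5)]  # impact (x, y), cm
grid = damage_grid(fragments, size=5.0, cells=5)
print(sum(sum(row) for row in grid))   # every fragment lands in some cell
```

Cells with several hits capture the cumulative effect the abstract mentions: clustered fragments threaten the rear plate more than the same fragments spread out.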
Funding: Supported by the National Natural Science Foundation of China (60604021, 60874054).
Abstract: To solve multi-class fault diagnosis tasks, the decision tree support vector machine (DTSVM), which combines SVM and decision tree using the concept of dichotomy, is proposed. Since the classification performance of DTSVM highly depends on its structure, a genetic algorithm is introduced into the formation of the decision tree to cluster the multiple classes so that the distance between the clustering centers of the two sub-classes is maximized; thus, the most separable classes are separated at each node of the decision tree. Numerical simulations conducted on three datasets, compared with "one-against-all" and "one-against-one", demonstrate that the proposed method has better performance and higher generalization ability than the two conventional methods.
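The tree-construction principle above, splitting the class set so the two sub-class centers are far apart, can be sketched without the genetic algorithm. The greedy farthest-pair rule below is a stand-in for the paper's GA search, and the class centroids are invented; it only illustrates what "most separable classes separated first" means at one node.

```python
# Greedy stand-in for the GA-driven class split: seed the two groups with the
# farthest pair of class centroids, then assign each remaining class to the
# nearer seed. Class names and centroids are hypothetical.
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def split_classes(class_means):
    names = list(class_means)
    a, b = max(((u, v) for u in names for v in names if u < v),
               key=lambda p: dist2(class_means[p[0]], class_means[p[1]]))
    left, right = [a], [b]
    for c in names:
        if c in (a, b):
            continue
        (left if dist2(class_means[c], class_means[a])
              <= dist2(class_means[c], class_means[b]) else right).append(c)
    return left, right

means = {"F1": (0.0, 0.0), "F2": (0.2, 0.1), "F3": (5.0, 5.0), "F4": (5.1, 4.8)}
print(split_classes(means))
```

Applying the rule recursively to each side yields a binary tree in which every internal node needs only one binary SVM, which is the dichotomy structure DTSVM exploits.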
Funding: Supported by the National Natural Science Foundation of China (60775047, 60402024).
Abstract: The support vector machine (SVM) is a novel machine learning method with the ability to approximate nonlinear functions with arbitrary accuracy. Proper parameter setting is crucial for SVM learning results and generalization ability, yet there is currently no systematic, general method for parameter selection. In this article, SVM parameter selection for function approximation is regarded as a compound optimization problem, and a mutative scale chaos optimization algorithm is employed to search for optimal parameter values. The chaos optimization algorithm is an effective approach to global optimization, and the mutative scale variant improves search efficiency and accuracy. Several simulation examples show the sensitivity of the SVM parameters and demonstrate the superiority of the proposed method for nonlinear function approximation.
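The mutative scale chaos idea can be sketched as follows: a logistic-map sequence explores the current search interval, and the interval is periodically shrunk around the best point found (the "mutative scale"). The objective below is a toy quadratic standing in for the SVM parameter-selection problem, and all tuning constants are assumptions, not the article's settings.

```python
# Sketch of mutative scale chaos optimization on a toy 1-D objective.
def chaos_search(f, lo, hi, rounds=6, steps=200, shrink=0.5, z0=0.345):
    z = z0                              # chaotic variable in (0, 1)
    best_x, best_y = lo, f(lo)
    for _ in range(rounds):
        for _ in range(steps):
            z = 4.0 * z * (1.0 - z)     # logistic map in its chaotic regime
            x = lo + z * (hi - lo)      # map chaos onto the current interval
            y = f(x)
            if y < best_y:
                best_x, best_y = x, y
        half = shrink * (hi - lo) / 2   # mutative scale: zoom in on the best
        lo, hi = max(lo, best_x - half), min(hi, best_x + half)
    return best_x, best_y

x, y = chaos_search(lambda x: (x - 1.7) ** 2 + 0.5, 0.0, 10.0)
print(round(x, 3), round(y, 3))
```

The chaotic sequence gives ergodic, non-repeating coverage of the interval, while the shrinking scale trades exploration for precision, which is the efficiency gain the article attributes to the mutative scale variant.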
基金Projects(61572525,61272148)supported by the National Natural Science Foundation of ChinaProject(20120162110061)supported by the PhD Programs Foundation of Ministry of Education of China+1 种基金Project(CX2014B066)supported by the Hunan Provincial Innovation Foundation for Postgraduate,ChinaProject(2014zzts044)supported by the Fundamental Research Funds for the Central Universities,China
Abstract: Cloud data centers consume enormous amounts of power, leading to high energy consumption. To solve this problem, an energy-efficient virtual machine (VM) consolidation algorithm named PVDE (prediction-based VM deployment algorithm for energy efficiency) is presented. The proposed algorithm uses a linear weighted method to predict the load of a host and, based on the predicted load, classifies the hosts in the data center into four classes for the purpose of VM migration. We also propose four types of VM selection algorithms to determine potential VMs to be migrated. We performed extensive performance analysis of the proposed algorithms. Experimental results show that, in contrast to other energy-saving algorithms, the proposed algorithm significantly reduces energy consumption while maintaining low service level agreement (SLA) violations.
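The two PVDE ingredients named above, a linear weighted load forecast and a four-way host classification, can be sketched briefly. The weights, thresholds, and class names below are assumptions for illustration, not the paper's actual values.

```python
# Sketch of PVDE-style load prediction and host classification.
# Weights, thresholds, and class labels are hypothetical.
def predict_load(history, weights=(0.1, 0.2, 0.3, 0.4)):
    """Linear weighted prediction: more recent samples weigh more."""
    assert len(history) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
    return sum(w * h for w, h in zip(weights, history))

def classify_host(load, low=0.2, normal=0.6, high=0.8):
    """Four classes: lightly loaded hosts can receive migrated VMs,
    overloaded hosts should migrate VMs away."""
    if load < low:
        return "little-loaded"
    if load < normal:
        return "normally-loaded"
    if load < high:
        return "heavily-loaded"
    return "overloaded"

hist = [0.55, 0.60, 0.70, 0.85]   # last four CPU utilisation samples
pred = predict_load(hist)
print(round(pred, 3), classify_host(pred))
```

Predicting the load rather than reacting to the current sample lets the consolidation step avoid migrating VMs onto hosts whose utilisation is trending upward.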
基金supported by the National Natural Science Foundation of China (7060103570801062)
Abstract: A self-adaptive large neighborhood search method is presented for scheduling n jobs on m non-identical parallel machines with multiple time windows. Another feature of the problem lies in oversubscription: not all jobs can be scheduled within the specified scheduling horizons owing to limited machine capacity. The objective is thus to maximize the overall profit of processed jobs while respecting machine constraints. A first-in-first-out heuristic is applied to find an initial solution, and a large neighborhood search procedure is then employed to relax and re-optimize cumbersome solutions. A machine learning mechanism is also introduced to converge on the most efficient neighborhoods for the problem. Extensive computational results are presented based on data from an application involving the daily observation scheduling of a fleet of Earth observing satellites. The method rapidly solves most problem instances to optimality or near-optimality and shows robust performance in sensitivity analysis.
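The first-in-first-out construction step above can be sketched on a toy instance: jobs are taken in release order and placed into the earliest feasible time window of any machine, and jobs that do not fit are simply left unscheduled, which is exactly the oversubscription the abstract describes. The instance below is invented.

```python
# Toy FIFO initial-solution builder for oversubscribed parallel-machine
# scheduling with time windows. All jobs/machines are hypothetical.
def fifo_schedule(jobs, machines):
    """jobs: list of (name, release, duration, profit), taken FIFO by release.
    machines: dict machine -> list of mutable [window_start, window_end]."""
    placed, profit = [], 0
    for name, release, dur, p in sorted(jobs, key=lambda j: j[1]):
        for m, windows in machines.items():
            done = False
            for w in windows:
                start = max(w[0], release)
                if start + dur <= w[1]:       # fits inside this window
                    placed.append((name, m, start))
                    profit += p
                    w[0] = start + dur        # consume the front of the window
                    done = True
                    break
            if done:
                break
    return placed, profit

jobs = [("J1", 0, 3, 10), ("J2", 1, 4, 8), ("J3", 2, 5, 12)]
machines = {"M1": [[0, 6]], "M2": [[0, 5]]}
sched, profit = fifo_schedule(jobs, machines)
print(sched, profit)
```

Here J3, despite its high profit, finds no feasible window and stays unscheduled; the large neighborhood search would then remove and reinsert jobs to try to recover such profit.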
Funding: Supported by the Beijing Natural Science Foundation (KZ201211232039); the National Natural Science Foundation of China (51275052); the Funding Project for Academic Human Resources Development in Institutions of Higher Learning under the Jurisdiction of Beijing Municipality (PHR201106132); and PXM2014_014224_000080.
Abstract: Fault diagnosis technology plays an important role in industry because an unexpected machine fault can cause heavy losses for people and companies. A fault diagnosis model based on multi-manifold learning and particle swarm optimization support vector machine (PSO-SVM) is studied. The model is applied to a rolling bearing experiment with three kinds of faults. The results verify that the model based on multi-manifold learning and PSO-SVM acquires fault-sensitive features with good accuracy.
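The PSO half of PSO-SVM can be sketched in isolation: a swarm searches a 2-D space (think of the two axes as SVM hyperparameters such as log C and log γ). The objective below is a toy bowl function standing in for a real SVM cross-validation error, and all swarm constants are conventional defaults, not values from the paper.

```python
# Minimal particle swarm optimisation sketch on a toy 2-D objective.
import random

def pso(f, bounds, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=42):
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]          # per-particle best positions
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]),
                               bounds[d][1])
            y = f(xs[i])
            if y < pval[i]:
                pbest[i], pval[i] = xs[i][:], y
                if y < gval:
                    gbest, gval = xs[i][:], y
    return gbest, gval

best, val = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                bounds=[(-5, 5), (-5, 5)])
print([round(b, 2) for b in best], round(val, 4))
```

In the actual PSO-SVM pipeline, `f` would train an SVM at the candidate hyperparameters and return its validation error, so each swarm evaluation is far more expensive than this toy.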
Abstract: The support vector machine has become an increasingly popular tool for machine learning tasks involving classification, regression, or novelty detection. Training a support vector machine requires the solution of a very large quadratic programming problem, and traditional optimization methods cannot be directly applied owing to memory restrictions. Several approaches exist for circumventing these shortcomings and work well. Here, another learning algorithm, particle swarm optimization, is introduced for training the SVM. The method is tested on UCI datasets.
基金supported by the National Natural Science Foundation of China(51006052)the NUST Outstanding Scholar Supporting Program
Abstract: A method for fast l-fold cross validation is proposed for the regularized extreme learning machine (RELM). The computational time of fast l-fold cross validation increases as the fold number decreases, which is the opposite of naive l-fold cross validation. Fast l-fold cross validation therefore has an advantage over the naive version in terms of computational time, especially for large fold numbers such as l > 20. To corroborate the efficacy and feasibility of fast l-fold cross validation, experiments on five benchmark regression datasets are evaluated.
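The article's fast l-fold derivation is not reproduced here, but a close relative conveys the idea of why per-fold retraining can be avoided: for ridge-type models (and RELM is ridge regression on hidden-layer features), the leave-one-out residual has the exact closed form eᵢ/(1 − hᵢᵢ), where hᵢᵢ is a hat-matrix diagonal. The single-feature case below keeps the algebra short; the data are synthetic.

```python
# Exact leave-one-out shortcut for single-feature ridge regression
# (no intercept), compared against naive refitting. Data are synthetic.
def ridge_beta(xs, ys, lam):
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def loo_residuals_fast(xs, ys, lam):
    """LOO residuals via e_i / (1 - h_ii): one fit, no refitting."""
    beta = ridge_beta(xs, ys, lam)
    sxx = sum(x * x for x in xs) + lam
    return [(y - beta * x) / (1 - x * x / sxx) for x, y in zip(xs, ys)]

def loo_residuals_naive(xs, ys, lam):
    """LOO residuals by refitting the model n times."""
    out = []
    for i in range(len(xs)):
        xr, yr = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        out.append(ys[i] - ridge_beta(xr, yr, lam) * xs[i])
    return out

xs, ys, lam = [1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.9], 1.0
fast = loo_residuals_fast(xs, ys, lam)
naive = loo_residuals_naive(xs, ys, lam)
assert all(abs(a - b) < 1e-12 for a, b in zip(fast, naive))
print([round(r, 3) for r in fast])
```

Leave-one-out is the l = n extreme, which matches the abstract's observation that the fast method pays off most at large fold numbers: the shortcut's cost is independent of l, while naive refitting grows with it.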
Abstract: Supplier selection in supply chain management (SCM) has attracted considerable research interest in recent years. Recent literature shows that neural networks achieve better performance than traditional statistical methods. However, neural networks have inherent drawbacks, such as convergence to local optima, lack of generalization, and uncontrolled convergence. A relatively new machine learning technique, the support vector machine (SVM), which overcomes these drawbacks, is introduced to provide a model with better explanatory power for selecting ideal supplier partners. Moreover, in practice, supplier samples are often insufficient, and SVMs are well suited to training and testing with small samples. The prediction accuracies of the BPNN and SVM methods are compared for choosing appropriate suppliers, and real examples illustrate that the SVM method is superior to the BPNN.