To fully leverage the advantages of mechanization and informatization in tunnel boring machine (TBM) operations, the authors aim to promote the advancement of tunnel construction technology toward intelligent development. This involved exploring the deep integration of next-generation artificial intelligence technologies, such as sensing technology, automatic control technology, big data technology, deep learning, and machine vision, with key operational processes, including TBM excavation, direction adjustment, step changes, inverted arch block assembly, material transportation, and operation status assurance. The results of this integration are summarized as follows. (1) A TBM key excavation parameter prediction algorithm was developed with an accuracy rate exceeding 90%. The TBM intelligent step-change control algorithm, based on machine vision, achieved an image segmentation accuracy rate of 95% and a gripper shoe positioning error of ±5 mm. (2) An automatic positioning system for inverted arch blocks was developed, enabling real-time perception of the spatial position and deviation during the assembly process. The system maintains an elevation positioning deviation within ±3 mm and a horizontal positioning deviation within ±10 mm, reducing the number of surveyors in each work team. (3) A TBM intelligent rail transportation system was designed that achieves real-time human-machine positioning, automatic switch opening and closing, automatic obstacle avoidance, intelligent transportation planning, and integrated scheduling and command. Each locomotive formation requires one fewer shunter and improves comprehensive transportation efficiency by more than 20%. (4) Intelligent analysis and prediction algorithms were developed to monitor and predict the trends of hydraulic and gear oil parameters in real time, enhancing proactive maintenance and system reliability.
Urban tree species provide various essential ecosystem services in cities, such as regulating urban temperatures, reducing noise, capturing carbon, and mitigating the urban heat island effect. The quality of these services is influenced by species diversity, tree health, and the distribution and composition of trees. Traditionally, data on urban trees have been collected through field surveys and manual interpretation of remote sensing images. In this study, we evaluated the effectiveness of multispectral airborne laser scanning (ALS) data in classifying 24 common urban roadside tree species in Espoo, Finland. Tree crown structure information, intensity features, and spectral data were used for classification. Eight different machine learning algorithms were tested, with the extra trees (ET) algorithm performing the best, achieving an overall accuracy of 71.7% using multispectral LiDAR data. This result highlights that integrating structural and spectral information within a single framework can improve classification accuracy. Future research will focus on identifying the most important features for species classification and developing algorithms with greater efficiency and accuracy.
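As a rough sketch of this kind of pipeline, an extra trees classifier from scikit-learn can be trained on a feature matrix mixing crown-structure and spectral features. The data below are synthetic stand-ins; the feature count, species count, and class structure are assumptions for illustration, not the study's data:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: rows are tree crowns, columns mix structural
# features (height, crown width, ...) with per-channel ALS intensity
# statistics; four hypothetical species classes.
rng = np.random.default_rng(0)
n, n_species = 600, 4
X = rng.normal(size=(n, 12))
y = rng.integers(0, n_species, size=n)
X[:, 0] += y                     # give the classifier some signal
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

clf = ExtraTreesClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)
oa = accuracy_score(y_te, clf.predict(X_te))   # overall accuracy
print(round(oa, 3))
```

In the study, overall accuracy would be computed per species on real crown segments rather than on random vectors as here.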
Machine picking of cotton is an emerging practice in India intended to address labour shortages and rising production costs. Cotton production has been declining in recent years; however, the high density planting system (HDPS) offers a viable way to enhance productivity by increasing plant populations per unit area, optimizing resource utilization, and facilitating machine picking. Cotton is an indeterminate plant that produces excessive vegetative growth under favourable soil fertility and moisture conditions, which poses challenges for efficient machine picking. To address this issue, the application of plant growth retardants (PGRs) is essential for controlling canopy architecture. PGRs reduce internode elongation, promote regulated branching, and increase plant compactness, making cotton plants better suited for machine picking. PGR application also optimizes the distribution of photosynthates between vegetative and reproductive growth, resulting in higher yields and improved fibre quality. Integrating HDPS with PGR application produces a plant architecture well suited to efficient machine picking. However, the success of this integration depends on several factors, including cotton variety, environmental conditions, and geographical variation. These approaches not only address yield stagnation and labour shortages but also help to establish more effective and sustainable cotton farming practices, resulting in higher cotton productivity.
Estimating trawler fishing effort plays a critical role in characterizing marine fisheries activities, quantifying the ecological impact of trawling, and refining regulatory frameworks and policies. Understanding trawler fishing inputs offers crucial scientific data to support the sustainable management of offshore fishery resources in China. An XGBoost algorithm, optimized through Harris Hawks Optimization (HHO), was used to develop a model for identifying trawler fishing behaviour. The model demonstrated exceptional performance, achieving accuracy, sensitivity, specificity, and Matthews correlation coefficient values of 0.9713, 0.9806, 0.9632, and 0.9425, respectively. Using this model to detect fishing activities, the 2021 fishing effort of trawlers from Shandong Province in the sea area between 119°E and 124°E and 32°N and 40°N was quantified. A heatmap of fishing effort, generated at a spatial resolution of 1/8°, revealed that fishing activities were predominantly concentrated in two regions: 121.1°E to 124°E, 35.7°N to 38.7°N, and 119.8°E to 122.8°E, 33.6°N to 35.4°N. This research provides a foundation for quantitative evaluations of fishery resources, offering vital data to promote the sustainable development of marine capture fisheries.
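The four reported metrics can all be computed from the binary confusion matrix. A minimal sketch, where the counts are illustrative only and not the study's data:

```python
def binary_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, specificity and Matthews correlation
    coefficient from binary confusion-matrix counts, the metrics
    reported for a fishing / non-fishing point classifier."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn)            # recall on fishing points
    spec = tn / (tn + fp)            # recall on non-fishing points
    mcc = (tp * tn - fp * fn) / (
        ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    )
    return acc, sens, spec, mcc

# Illustrative counts only (not the paper's confusion matrix).
print(binary_metrics(tp=960, fn=19, tn=940, fp=36))
```

MCC is the most informative of the four here because it stays meaningful even when fishing and non-fishing points are imbalanced.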
The development of sustainable electrode materials for energy storage systems has become very important, and porous carbons derived from biomass are a leading candidate because of their tunable pore structure, environmental friendliness, and cost-effectiveness. Recent advances in controlling the pore structure of these carbons, and the relationship between pore structure and energy storage performance, are discussed, emphasizing the critical role of a balanced distribution of micropores, mesopores, and macropores in determining electrochemical behavior. Particular attention is given to how the intrinsic components of biomass precursors (lignin, cellulose, and hemicellulose) influence pore formation during carbonization. Carbonization and activation strategies for precisely controlling the pore structure are introduced. Finally, key challenges in the industrial production of these carbons are outlined, and future research directions are proposed. These include the establishment of a database of intrinsic biomass structures and machine learning-assisted pore structure engineering, aimed at providing guidance for the design of high-performance carbon materials for next-generation energy storage devices.
The gears of new energy vehicles are required to withstand higher rotational speeds and greater loads, which imposes higher precision requirements on gear manufacturing. However, machining process parameters can cause changes in cutting force and heat, which in turn affect gear machining precision. Therefore, this paper studies the effect of different process parameters on gear machining precision. A multi-objective optimization model is established relating the cutting speed, feed rate, and cutting depth of the worm wheel gear grinding machine to tooth surface deviations, tooth profile deviations, and tooth lead deviations. The response surface method (RSM) is used for experimental design, and the corresponding experimental results and optimal process parameters are obtained. Subsequently, gray relational analysis-principal component analysis (GRA-PCA), particle swarm optimization (PSO), and genetic algorithm-particle swarm optimization (GA-PSO) methods are used to analyze the experimental results and obtain different sets of optimal process parameters. The results show that the optimal process parameters obtained by the GRA-PCA, PSO, and GA-PSO methods all improve gear machining precision. Moreover, the gear machining precision obtained by GA-PSO is superior to that of the other methods.
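A bare-bones global-best PSO loop of the kind used in the PSO and GA-PSO stages might look as follows. The objective is a toy quadratic deviation surrogate with a known optimum, standing in for the fitted RSM model, and the bounds and coefficients are invented for illustration:

```python
import numpy as np

def pso_minimize(f, lb, ub, n_particles=30, iters=100, seed=0):
    """Bare-bones global-best particle swarm optimization in a box."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.uniform(lb, ub, size=(n_particles, lb.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)          # keep particles in bounds
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(f(g))

# Toy surrogate for total tooth deviation as a function of
# (cutting speed, feed rate, cutting depth); optimum placed by design.
dev = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.3) ** 2 + (p[2] - 0.05) ** 2
best, best_f = pso_minimize(dev, [0.5, 0.1, 0.01], [2.0, 0.6, 0.1])
print(best, best_f)
```

In the GA-PSO variant, genetic crossover and mutation operators are additionally applied to the swarm to reduce premature convergence.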
Causality, the science of cause and effect, has made it possible to create a new family of models, often referred to as causal models. Unlike models of a mathematical, numerical, empirical, or machine learning (ML) nature, causal models aim to tie the cause(s) to the effect(s) of a phenomenon (i.e., the data generating process) through causal principles. This paper presents one of the first works on creating causal models in the area of structural and construction engineering. To this end, the paper starts with a brief review of the principles of causality and then adopts four causal discovery algorithms, namely PC (Peter-Clark), FCI (fast causal inference), GES (greedy equivalence search), and GRaSP (greedy relaxation of the sparsest permutation), to examine four phenomena: predicting the load-bearing capacity of axially loaded members, the fire resistance of structural members, the shear strength of beams, and the resistance of walls against impulsive (blast) loading. Findings from this study reveal the possibility and merit of discovering complete and partial causal models. Finally, this study also proposes two simple metrics that can help assess the performance of causal discovery algorithms.
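Constraint-based algorithms such as PC and FCI are built on repeated conditional independence tests. A minimal Fisher-z partial-correlation test, sketched under a Gaussian assumption (the chain-structured data below are synthetic, not the paper's structural datasets):

```python
import numpy as np
from math import sqrt, log, erf

def ci_test(data, i, j, cond, alpha=0.05):
    """Fisher-z test of conditional independence X_i vs X_j given
    X_cond, the primitive that PC/FCI call repeatedly. Gaussian
    (linear) assumption; returns True if independence is accepted."""
    idx = [i, j] + list(cond)
    sub = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.inv(sub)                         # precision matrix
    r = -prec[0, 1] / sqrt(prec[0, 0] * prec[1, 1])   # partial correlation
    n = data.shape[0]
    z = 0.5 * log((1 + r) / (1 - r)) * sqrt(n - len(cond) - 3)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided p-value
    return p > alpha

rng = np.random.default_rng(1)
a = rng.normal(size=2000)
b = a + 0.1 * rng.normal(size=2000)   # chain: a -> b -> c
c = b + 0.1 * rng.normal(size=2000)
data = np.column_stack([a, b, c])
print(ci_test(data, 0, 2, []))    # a and c are marginally dependent
print(ci_test(data, 0, 2, [1]))   # conditioning on b should separate them
```

PC uses exactly this kind of test to delete edges: an edge a–c is removed once some conditioning set (here {b}) renders the pair independent.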
Background Plant tissue culture has emerged as a tool for improving cotton propagation and genetics, but the recalcitrant nature of cotton makes it difficult to develop in vitro regeneration. Cotton's recalcitrance is influenced by genotype, explant type, and environmental conditions. To overcome these issues, this study uses different machine learning-based predictive models employing multiple input factors. Cotyledonary node explants of two commercial cotton cultivars (STN-468 and GSN-12) were isolated from 7-8 day old seedlings and preconditioned with 5, 10, and 20 mg·L^(-1) kinetin (KIN) for 10 days. Thereafter, explants were postconditioned on full Murashige and Skoog (MS), 1/2 MS, 1/4 MS, or full MS + 0.05 mg·L^(-1) KIN, and cultured in a growth room illuminated with a combination of red and blue light-emitting diodes (LEDs). Statistical analysis (analysis of variance, regression analysis) was employed to assess the impact of the different treatments on shoot regeneration, with artificial intelligence (AI) models used to confirm the findings. Results GSN-12 exhibited superior shoot regeneration potential compared with STN-468, with an average of 4.99 shoots per explant versus 3.97. Optimal results were achieved with 5 mg·L^(-1) KIN preconditioning, 1/4 MS postconditioning, and 80% red LED, with a maximum shoot count of 7.75 for GSN-12 under these conditions; STN-468 reached 6.00 shoots under 10 mg·L^(-1) KIN preconditioning, MS with 0.05 mg·L^(-1) KIN postconditioning, and 75.0% red LED. Rooting was successfully achieved with naphthalene acetic acid and activated charcoal. Additionally, three powerful AI-based models, namely extreme gradient boosting (XGBoost), random forest (RF), and the artificial neural network-based multilayer perceptron (MLP) regression model, validated the findings. Conclusion GSN-12 outperformed STN-468, with optimal results from 5 mg·L^(-1) KIN + 1/4 MS + 80% red LED. Applying machine learning-based prediction models to optimize cotton tissue culture protocols for shoot regeneration helps to improve cotton regeneration efficiency.
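A sketch of how such regression models can be cross-validated on encoded culture factors. The factor encoding, response surface, and noise level below are hypothetical stand-ins for the experimental data, chosen only so the peak sits near the reported optimum:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical factor encoding: [KIN mg/L, MS strength, red-LED fraction].
rng = np.random.default_rng(0)
X = rng.uniform([0.0, 0.25, 0.5], [20.0, 1.0, 1.0], size=(300, 3))
# Toy shoot-count response peaking near 5 mg/L KIN, 1/4 MS, 80% red light.
y = (8.0 - 0.04 * (X[:, 0] - 5.0) ** 2 - 6.0 * (X[:, 1] - 0.25) ** 2
     - 10.0 * (X[:, 2] - 0.8) ** 2 + rng.normal(0.0, 0.3, 300))

scores = {}
for name, model in [("RF", RandomForestRegressor(random_state=0)),
                    ("MLP", MLPRegressor(hidden_layer_sizes=(32, 32),
                                         max_iter=2000, random_state=0))]:
    scores[name] = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print(scores)
```

With a fitted model of this kind, the optimal treatment combination can then be located by evaluating predictions over a grid of candidate factor levels.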
Background Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its use in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is greatly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various techniques available, crop models are the most effective and widely used tools for predicting yields. Results This study compares statistical and machine learning models to assess their ability to predict cotton yield across major producing districts of Karnataka, India, utilizing a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. Artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2) for cotton. Model evaluation metrics such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF) were also within acceptable limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts where yield was predicted. These differences highlight the differential interactions of weather factors in cotton yield formation, reflecting the individual response of each weather factor under the different soils and management conditions of the major cotton growing districts of Karnataka. Conclusions Compared with statistical models, machine learning models such as ANNs proved more efficient in forecasting cotton yield because of their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting in rainfed conditions and for studying the relative impacts of weather factors on yield. The study thus provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
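The three reported evaluation metrics are straightforward to implement. A minimal sketch with illustrative yield values, not the study's data (EF here is the Nash-Sutcliffe form commonly used for crop models):

```python
import numpy as np

def rmse(obs, pred):
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2)))

def nrmse(obs, pred):
    """RMSE normalized by the observed mean, expressed in percent."""
    return 100.0 * rmse(obs, pred) / float(np.mean(obs))

def ef(obs, pred):
    """Modelling efficiency (Nash-Sutcliffe): 1 is perfect, 0 means
    no better than predicting the observed mean everywhere."""
    obs = np.asarray(obs, float)
    return float(1 - np.sum((obs - pred) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))

obs = np.array([520.0, 480.0, 610.0, 555.0])    # illustrative yields, kg/ha
pred = np.array([500.0, 495.0, 590.0, 560.0])
print(rmse(obs, pred), nrmse(obs, pred), ef(obs, pred))
```

An nRMSE below roughly 10% and an EF close to 1 are the kinds of values usually treated as acceptable in district-level yield forecasting.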
Taoren and Xingren are commonly used herbs in East Asian medicine with different medicinal functions but substantial economic differences, and there are cases of adulterated sales in market transactions. An effective adulteration recognition method based on hyperspectral technology and machine learning was designed as a non-destructive testing approach in this paper. A hyperspectral dataset comprising 500 Taoren and 500 Xingren samples was established; six feature selection methods were considered in the modeling of the radial basis function support vector machine (RBF-SVM), and the interaction between the two optimization stages was further researched. Two mixed metaheuristic modeling methods, Mixed-PSO and Mixed-SA, were designed, fusing band selection and hyperparameter optimization from two stages into one, with a detailed process analysis. The metrics of these mixed models improved on the traditional two-stage method. The accuracy of Mixed-PSO was 89.2% in five-fold cross-validation, an increase of 4.818% over the vanilla RBF-SVM; the accuracy of Mixed-SA was 88.7%, matching the traditional two-stage method while relying on only 48 key bands out of the full 100 bands in RBF-SVM model fitting.
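The core idea of the mixed encoding is that a single particle carries both the SVM hyperparameters and the band mask, so one fitness call evaluates the whole pipeline. A sketch under assumed encodings, with synthetic spectra; the gene layout and the 0.5 threshold for band scores are assumptions, not the paper's exact scheme:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_bands = 100
X = rng.normal(size=(200, n_bands))        # synthetic "spectra"
y = (X[:, 10] + X[:, 55] > 0).astype(int)  # two informative bands

def fitness(particle):
    """One particle encodes the whole model: genes 0-1 are log10(C)
    and log10(gamma); the remaining 100 are band scores, thresholded
    at 0.5 to give the selected-band mask."""
    C, gamma = 10.0 ** particle[0], 10.0 ** particle[1]
    mask = particle[2:] > 0.5
    if not mask.any():
        return 0.0                         # empty band set is invalid
    svm = SVC(kernel="rbf", C=C, gamma=gamma)
    return cross_val_score(svm, X[:, mask], y, cv=5).mean()

p = np.concatenate([[0.0, -1.5], rng.random(n_bands)])  # a random particle
print(round(fitness(p), 3))
```

A PSO or simulated-annealing driver then perturbs such particles directly, so band selection and hyperparameter tuning co-evolve instead of being run in two stages.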
As the core component of inertial navigation systems, the fiber optic gyroscope (FOG), with technical advantages such as low power consumption, long lifespan, fast startup, and flexible structural design, is widely used in aerospace, unmanned driving, and other fields. However, because of the temperature sensitivity of optical devices, environmental temperature induces errors in the FOG, greatly limiting its output accuracy. This work investigates machine-learning-based temperature error compensation techniques for the FOG, focusing on compensating for the bias errors generated in the fiber ring by the Shupe effect. It proposes a composite model based on k-means clustering, support vector regression, and particle swarm optimization, and significantly reduces redundancy within the samples by adopting interval sequence sampling. Metrics such as root mean square error (RMSE), mean absolute error (MAE), bias stability, and Allan variance are selected to evaluate the model's performance and compensation effectiveness. This work effectively enhances the consistency between data and models across different temperature ranges and temperature gradients, improving the bias stability of the FOG from 0.022 °/h to 0.006 °/h. Compared with existing methods using a single machine learning model, the proposed method increases the improvement in the bias stability of the compensated FOG from 57.11% to 71.98% and enhances the suppression of the rate ramp noise coefficient from 2.29% to 14.83%. This work improves the accuracy of the FOG after compensation, providing theoretical guidance and technical references for sensor error compensation in other fields.
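Bias stability is typically read off the Allan deviation curve of the gyro output. A minimal overlapping Allan deviation sketch; the white-noise signal below is synthetic, not FOG data:

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Overlapping Allan deviation of a rate signal sampled at fs Hz,
    the statistic from which FOG noise terms are identified."""
    theta = np.cumsum(np.asarray(rate, float)) / fs   # integrated angle
    out = []
    for tau in taus:
        m = int(tau * fs)                              # samples per cluster
        d = theta[2 * m:] - 2.0 * theta[m:-m] + theta[:-2 * m]
        out.append(np.sqrt(np.mean(d ** 2) / (2.0 * tau ** 2)))
    return np.array(out)

rng = np.random.default_rng(0)
fs = 100.0
white = rng.normal(0.0, 0.1, 200_000)   # synthetic white rate noise
adev = allan_deviation(white, fs, [0.1, 1.0, 10.0])
# For white rate noise the Allan deviation falls as 1/sqrt(tau).
print(adev)
```

On real FOG data the curve flattens at some tau; the minimum of that flat region is the bias-stability figure quoted in the abstract (0.022 °/h before and 0.006 °/h after compensation).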
The graded density impactor (GDI) dynamic loading technique is crucial for acquiring the dynamic physical property parameters of materials used in weapons. The accuracy and timeliness of GDI structural design are key to achieving controllable stress-strain rate loading. In this study, we have, for the first time, combined one-dimensional fluid computational software with machine learning methods. We first elucidated the mechanisms by which GDI structures control stress and strain rates. Subsequently, we constructed a machine learning model to create a structure-property response surface. The results show that altering the loading velocity and interlayer thickness has a pronounced regulatory effect on stress and strain rates. In contrast, the impedance distribution index and target thickness have less significant effects on stress regulation, although there is a matching relationship between target thickness and interlayer thickness. Compared with traditional design methods, the machine learning approach offers a 10^(4) to 10^(5) times increase in efficiency and the potential to reach a global optimum, holding promise for guiding the design of GDI.
The ammunition loading system manipulator is susceptible to gear failure due to high-frequency, heavy-load reciprocating motions and the absence of protective gear components. After a fault occurs, the distribution of fault characteristics under different loads is markedly inconsistent, and the data are hard to label, which makes it difficult for traditional diagnosis methods based on single-condition training to generalize to different conditions. To address these issues, this paper proposes a novel transfer discriminant neural network (TDNN) for gear fault diagnosis. Specifically, an optimized joint distribution adaptation mechanism (OJDA) is designed to solve the distribution alignment problem between two domains. To improve the classification effect within the domain and the feature recognition capability with few labeled data, metric learning is introduced to distinguish features from different fault categories. In addition, the TDNN adopts a new pseudo-label training strategy that achieves label replacement by comparing the maximum probability of the pseudo-label with the test result. The proposed TDNN is verified on an experimental dataset from an artillery manipulator device, achieving a diagnosis accuracy of 99.5% and significantly outperforming other traditional adaptation methods.
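The generic part of a pseudo-label strategy, keeping only target-domain samples whose maximum predicted class probability is high enough, can be sketched as follows. The threshold value is an assumption, and the paper's TDNN additionally compares pseudo-labels against test results, so this is an illustration of the common core only:

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.95):
    """Keep unlabeled samples whose maximum class probability exceeds
    a confidence threshold, assigning that class as the pseudo-label.
    Returns (kept sample indices, their pseudo-labels)."""
    probs = np.asarray(probs, float)
    conf = probs.max(axis=1)                 # max probability per sample
    keep = conf >= threshold
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]

# Three unlabeled samples, three fault classes (illustrative values).
probs = np.array([[0.97, 0.02, 0.01],
                  [0.40, 0.35, 0.25],    # too uncertain: discarded
                  [0.01, 0.01, 0.98]])
idx, labels = select_pseudo_labels(probs)
print(idx, labels)
```

The selected samples are then fed back into training as if labeled, which is what lets a model trained under one load condition adapt to unlabeled data from another.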
This study presents a machine learning-based method for predicting the fragment velocity distribution in warhead fragmentation under explosive loading conditions. The fragment resultant velocities are correlated with key design parameters, including casing dimensions and detonation positions. The paper details the finite element analysis of fragmentation, the characterization of the dynamic hardening and fracture models, the generation of comprehensive datasets, and the training of the ANN model. The results show the influence of casing dimensions on fragment velocity distributions, with resultant velocity tending to increase with reduced thickness and increased length and diameter. The model's predictive capability is demonstrated through accurate predictions for both training and testing datasets, showing its potential for real-time prediction of fragmentation performance.
A typical Whipple shield consists of double-layered plates separated by a certain gap. Space debris impacts the outer plate and is broken into a debris cloud (shattered, molten, or vaporized) with dispersed energy and momentum, which reduces the risk of penetrating the bulkhead. In the realm of hypervelocity impact, strain rate (>10^(5) s^(-1)) effects are negligible, and fluid dynamics is employed to describe the impact process. Efficient numerical tools for precisely predicting the damage degree can greatly accelerate the design and optimization of advanced protective structures. Current hypervelocity impact research focuses primarily on the interaction between the projectile and the front plate and on the movement of the debris cloud. However, the damage mechanism of debris cloud impacts on the rear plate, the critical threat component, remains underexplored owing to complex multi-physics processes and prohibitive computational costs. Existing approaches, ranging from semi-empirical equations to a machine learning-based ballistic limit prediction method, are constrained to binary penetration classification. Moreover, uneven data from experiments and simulations render these methods ineffective when the projectile has an irregular shape and a complicated flight attitude. It is therefore urgent to develop a new method for predicting rear plate damage, which can help to deepen understanding of the damage mechanism. In this study, a machine learning (ML) method is developed to predict the damage distribution in the rear plate. Based on the unit velocity space, discretized information on the debris cloud and rear plate damage from rare simulation cases is used as input data for training the ML models, while the generalization ability for damage distribution prediction is tested on other simulation cases with different attack angles. The results demonstrate that the training and prediction accuracies of the Random Forest (RF) algorithm significantly surpass those of Artificial Neural Networks (ANNs) and the Support Vector Machine (SVM). The RF-based model effectively identifies damage features of sparsely distributed debris clouds and cumulative effects. This study establishes an expandable dataset that accommodates additional parameters to improve prediction accuracy. The results demonstrate the model's ability to overcome data imbalance limitations through debris cloud features, enabling rapid and accurate rear plate damage prediction across wide scenarios with minimal data requirements.
Machine learning (ML) is well suited to the prediction of high-complexity, high-dimensional problems such as those encountered in terminal ballistics. We evaluate the performance of four popular ML-based regression models, extreme gradient boosting (XGBoost), artificial neural network (ANN), support vector regression (SVR), and Gaussian process regression (GP), on two common terminal ballistics problems: (a) predicting the V50 ballistic limit of monolithic metallic armour impacted by small and medium calibre projectiles and fragments, and (b) predicting the depth to which a projectile will penetrate a target of semi-infinite thickness. To achieve this we utilise two datasets, each consisting of approximately 1000 samples, collated from public release sources. We demonstrate that all four model types provide similarly excellent agreement when interpolating within the training data and diverge when extrapolating outside this range. Although extrapolation is not advisable for ML-based regression models, such capability is required for applications such as lethality/survivability analysis. To circumvent this, we incorporate expert knowledge and physics-based models via enforced monotonicity, as a Gaussian prior mean, and through a modified loss function. The physics-informed models demonstrate improved performance over both classical physics-based models and the basic ML regression models, providing the ability to accurately fit experimental data when available and to revert to the physics-based model when not. The resulting models demonstrate high levels of predictive accuracy over a very wide range of projectile types, target materials and thicknesses, and impact conditions significantly more diverse than is achievable with any existing analytical approach. Compared with numerical analysis tools such as finite element solvers, the ML models run orders of magnitude faster. We provide general guidelines throughout for the development, application, and reporting of ML models in terminal ballistics problems.
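The "Gaussian prior mean" idea can be sketched by fitting a GP to the residuals of a physics model, so that predictions revert to the physics model away from the training data. The penetration law, noise level, and velocities here are toy stand-ins, not the paper's models or data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy stand-in for a classical penetration law: depth vs impact velocity.
physics = lambda v: 0.002 * v ** 1.4

rng = np.random.default_rng(0)
v_train = rng.uniform(500.0, 1500.0, 40)
depth = physics(v_train) * (1 + 0.1 * rng.normal(size=40))  # "experiments"

# Fit the GP to residuals so the physics model acts as the prior mean:
# far from training data the GP posterior returns to zero residual,
# i.e. the combined prediction returns to the physics model.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=200.0),
                              alpha=1.0, normalize_y=True, optimizer=None)
gp.fit(v_train[:, None], depth - physics(v_train))

v_new = np.array([1000.0, 3000.0])   # in-range point and far extrapolation
pred = physics(v_new) + gp.predict(v_new[:, None])
print(pred)
```

This is one of the three mechanisms listed in the abstract; enforced monotonicity and a physics-penalized loss function play the analogous role for the tree-based and neural models.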
The paper considers the application of artificial neural networks (ANNs) for fast numerical evaluation of the residual impactor velocity for a family of perforated PMMA (polymethylmethacrylate) targets. The ANN models were trained using sets of numerical results on the impact of PMMA plates obtained via dynamic FEM coupled with an incubation time fracture criterion. The developed approach makes it possible to evaluate the impact strength of a particular target configuration without complicated FEM calculations, which require considerable computational resources. Moreover, it is shown that the ANN models are able to predict results for configurations which cannot be processed using the developed FEM routine due to numerical instabilities and errors: the trained neural network uses information from successful computations to obtain results for the problematic cases. A simple static problem of perforated plate deformation is discussed prior to the impact problem, and preferable ANN architectures are presented for both problems. Some insight into perforation pattern optimization using a genetic algorithm coupled with the ANN is also provided, and optimized perforation patterns which theoretically enhance the target impact strength are constructed.
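A minimal genetic algorithm over binary perforation masks might look as follows. The fitness function is a toy surrogate with a known best pattern, standing in for the trained ANN's impact-strength prediction; operator choices (tournament selection, one-point crossover, bit-flip mutation) are generic assumptions:

```python
import numpy as np

def ga_optimize(fitness, n_genes, pop=40, gens=60, p_mut=0.02, seed=0):
    """Minimal genetic algorithm maximizing fitness over binary strings."""
    rng = np.random.default_rng(seed)
    P = rng.integers(0, 2, size=(pop, n_genes))
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in P])
        # Tournament selection (size 2).
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where((f[i] > f[j])[:, None], P[i], P[j])
        # One-point crossover between consecutive parents.
        cut = rng.integers(1, n_genes, pop)
        kids = parents.copy()
        for k in range(0, pop - 1, 2):
            c = cut[k]
            kids[k, c:], kids[k + 1, c:] = parents[k + 1, c:], parents[k, c:]
        # Bit-flip mutation.
        flip = rng.random(kids.shape) < p_mut
        P = np.where(flip, 1 - kids, kids)
    f = np.array([fitness(ind) for ind in P])
    return P[np.argmax(f)], f.max()

target = (np.arange(20) % 2 == 0).astype(int)        # "best" hole pattern
fitness = lambda ind: -np.sum(np.abs(ind - target))  # toy ANN surrogate
best, best_f = ga_optimize(fitness, n_genes=20)
print(best, best_f)
```

In the paper's setting, each fitness call would instead run the trained ANN on the candidate perforation pattern, which is what makes a population-based search affordable compared with per-candidate FEM runs.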
文摘To fully leverage the advantages of mechanization and informatization in tunnel boring machine(TBM)operations,the authors aim to promote the advancement of tunnel construction technology toward intelligent development.This involved exploring the deep integration of next-generation artificial intelligence technologies,such as sensing technology,automatic control technology,big data technology,deep learning,and machine vision,with key operational processes,including TBM excavation,direction adjustment,step changes,inverted arch block assembly,material transportation,and operation status assurance.The results of this integration are summarized as follows.(1)TBM key excavation parameter prediction algorithm was developed with an accuracy rate exceeding 90%.The TBM intelligent step-change control algorithm,based on machine vision,achieved an image segmentation accuracy rate of 95%and gripper shoe positioning error of±5 mm.(2)An automatic positioning system for inverted arch blocks was developed,enabling real-time perception of the spatial position and deviation during the assembly process.The system maintains an elevation positioning deviation within±3 mm and a horizontal positioning deviation within±10 mm,reducing the number of surveyors in each work team.(3)A TBM intelligent rail transportation system that achieves real-time human-machine positioning,automatic switch opening and closing,automatic obstacle avoidance,intelligent transportation planning,and integrated scheduling and command was designed.Each locomotive formation reduces one shunter and improves comprehensive transportation efficiency by more than 20%.(4)Intelligent analysis and prediction algorithms were developed to monitor and predict the trends of the hydraulic and gear oil parameters in real time,enhancing the proactive maintenance and system reliability.
文摘Urban tree species provide various essential ecosystem services in cities,such as regulating urban temperatures,reducing noise,capturing carbon,and mitigating the urban heat island effect.The quality of these services is influenced by species diversity,tree health,and the distribution and the composition of trees.Traditionally,data on urban trees has been collected through field surveys and manual interpretation of remote sensing images.In this study,we evaluated the effectiveness of multispectral airborne laser scanning(ALS)data in classifying 24 common urban roadside tree species in Espoo,Finland.Tree crown structure information,intensity features,and spectral data were used for classification.Eight different machine learning algorithms were tested,with the extra trees(ET)algorithm performing the best,achieving an overall accuracy of 71.7%using multispectral LiDAR data.This result highlights that integrating structural and spectral information within a single framework can improve the classification accuracy.Future research will focus on identifying the most important features for species classification and developing algorithms with greater efficiency and accuracy.
Abstract: Machine picking of cotton is an emerging practice in India, adopted to address labour shortages and rising production costs. Cotton production has been declining in recent years; however, the high density planting system (HDPS) offers a viable method to enhance productivity by increasing plant populations per unit area, optimizing resource utilization, and facilitating machine picking. Cotton is an indeterminate plant that produces excessive vegetative growth under favourable soil fertility and moisture conditions, posing challenges for efficient machine picking. To address this issue, the application of plant growth retardants (PGRs) is essential for controlling canopy architecture. PGRs reduce internode elongation, promote regulated branching, and increase plant compactness, making cotton plants better suited for machine picking. PGR application also optimizes photosynthate distribution between vegetative and reproductive growth, resulting in higher yields and improved fibre quality. The integration of HDPS and PGR applications results in an optimal plant architecture for improving machine picking efficiency. However, the success of this integration depends on several factors, including cotton variety, environmental conditions, and geographical variations. These approaches not only address yield stagnation and labour shortages but also help to establish more effective and sustainable cotton farming practices, resulting in higher cotton productivity.
Abstract: Estimating trawler fishing effort plays a critical role in characterizing marine fisheries activities, quantifying the ecological impact of trawling, and refining regulatory frameworks and policies. Understanding trawler fishing inputs offers crucial scientific data to support the sustainable management of offshore fishery resources in China. An XGBoost algorithm was introduced and optimized through Harris Hawks Optimization (HHO) to develop a model for identifying trawler fishing behaviour. The model demonstrated exceptional performance, achieving accuracy, sensitivity, specificity, and Matthews correlation coefficient values of 0.9713, 0.9806, 0.9632, and 0.9425, respectively. Using this model to detect fishing activities, the fishing effort of trawlers from Shandong Province in the sea area between 119°E and 124°E and between 32°N and 40°N in 2021 was quantified. A heatmap depicting fishing effort, generated with a spatial resolution of 1/8°, revealed that fishing activities were predominantly concentrated in two regions: 121.1°E to 124°E, 35.7°N to 38.7°N, and 119.8°E to 122.8°E, 33.6°N to 35.4°N. This research provides a foundation for quantitative evaluations of fishery resources, offering vital data to promote the sustainable development of marine capture fisheries.
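The four metrics reported above all derive from a binary confusion matrix. As a minimal sketch of how they are computed (the counts below are hypothetical, not the paper's data):

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, and Matthews correlation
    coefficient (MCC) from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # true-positive rate (recall)
    specificity = tn / (tn + fp)          # true-negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, sensitivity, specificity, mcc

# Illustrative confusion counts (hypothetical)
acc, sens, spec, mcc = classification_metrics(tp=485, fp=18, tn=472, fn=10)
```

MCC is the most informative of the four for imbalanced behaviour data, since it only approaches 1 when both classes are predicted well.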
Abstract: The development of sustainable electrode materials for energy storage systems has become very important, and porous carbons derived from biomass have become an important candidate because of their tunable pore structure, environmental friendliness, and cost-effectiveness. Recent advances in controlling the pore structure of these carbons, and the relationship between pore structure and energy storage performance, are discussed, emphasizing the critical role of a balanced distribution of micropores, mesopores, and macropores in determining electrochemical behavior. Particular attention is given to how the intrinsic components of biomass precursors (lignin, cellulose, and hemicellulose) influence pore formation during carbonization. Carbonization and activation strategies to precisely control the pore structure are introduced. Finally, key challenges in the industrial production of these carbons are outlined, and future research directions are proposed. These include the establishment of a database of biomass intrinsic structures and machine learning-assisted pore structure engineering, aimed at providing guidance for the design of high-performance carbon materials for next-generation energy storage devices.
Fund: Projects (U22B2084, 52275483, 52075142) supported by the National Natural Science Foundation of China; Project (2023ZY01050) supported by the Ministry of Industry and Information Technology High Quality Development, China.
Abstract: The gears of new energy vehicles are required to withstand higher rotational speeds and greater loads, which imposes higher precision requirements on gear manufacturing. However, machining process parameters can cause changes in cutting force/heat, thereby affecting gear machining precision. Therefore, this paper studies the effect of different process parameters on gear machining precision. A multi-objective optimization model is established for the relationship between process parameters and tooth surface deviations, tooth profile deviations, and tooth lead deviations through the cutting speed, feed rate, and cutting depth of the worm wheel gear grinding machine. The response surface method (RSM) is used for experimental design, and the corresponding experimental results and optimal process parameters are obtained. Subsequently, gray relational analysis-principal component analysis (GRA-PCA), particle swarm optimization (PSO), and genetic algorithm-particle swarm optimization (GA-PSO) methods are used to analyze the experimental results and obtain different optimal process parameters. The results show that the optimal process parameters obtained by the GRA-PCA, PSO, and GA-PSO methods improve gear machining precision. Moreover, the gear machining precision obtained by GA-PSO is superior to that of the other methods.
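The grey relational analysis step of the GRA-PCA method can be sketched as follows: each candidate parameter setting is scored by its grey relational grade against an ideal reference sequence. This is a minimal illustration with hypothetical data and equal weights (the paper derives component weights via PCA):

```python
def grey_relational_grades(series, reference, rho=0.5):
    """Grey relational analysis: rank alternatives against a reference.

    series    : list of alternatives, each a list of normalized responses
    reference : the ideal reference sequence, same length as each alternative
    rho       : distinguishing coefficient, conventionally 0.5
    """
    # Absolute deviations of each alternative from the reference
    deltas = [[abs(r - x) for r, x in zip(reference, s)] for s in series]
    flat = [d for row in deltas for d in row]
    dmin, dmax = min(flat), max(flat)
    # Grey relational coefficient per response
    coeff = [[(dmin + rho * dmax) / (d + rho * dmax) for d in row]
             for row in deltas]
    # Grade: mean coefficient per alternative (equal weights assumed here)
    return [sum(row) / len(row) for row in coeff]

# Two hypothetical parameter settings scored on three normalized deviations
grades = grey_relational_grades([[0.9, 0.8, 0.95], [0.5, 0.4, 0.6]],
                                reference=[1.0, 1.0, 1.0])
```

The setting whose responses sit closer to the reference receives the higher grade and would be selected as the better process parameter combination.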
Abstract: Causality, the science of cause and effect, has made it possible to create a new family of models, often referred to as causal models. Unlike those of mathematical, numerical, empirical, or machine learning (ML) nature, causal models aim to tie the cause(s) to the effect(s) pertaining to a phenomenon (i.e., data generating process) through causal principles. This paper presents one of the first works at creating causal models in the area of structural and construction engineering. To this end, the paper starts with a brief review of the principles of causality and then adopts four causal discovery algorithms, namely PC (Peter-Clark), FCI (fast causal inference), GES (greedy equivalence search), and GRaSP (greedy relaxation of the sparsest permutation), to examine four phenomena: predicting the load-bearing capacity of axially loaded members, fire resistance of structural members, shear strength of beams, and resistance of walls against impulsive (blast) loading. Findings from this study reveal the possibility and merit of discovering complete and partial causal models. Finally, this study also proposes two simple metrics that can help assess the performance of causal discovery algorithms.
Abstract: Background Plant tissue culture has emerged as a tool for improving cotton propagation and genetics, but the recalcitrant nature of cotton makes in vitro regeneration difficult. Cotton's recalcitrance is influenced by genotype, explant type, and environmental conditions. To overcome these issues, this study uses different machine learning-based predictive models employing multiple input factors. Cotyledonary node explants of two commercial cotton cultivars (STN-468 and GSN-12) were isolated from 7–8 days old seedlings and preconditioned with 5, 10, and 20 mg·L^(-1) kinetin (KIN) for 10 days. Thereafter, explants were postconditioned on full Murashige and Skoog (MS), 1/2MS, 1/4MS, and full MS + 0.05 mg·L^(-1) KIN, and cultured in a growth room illuminated with a combination of red and blue light-emitting diodes (LEDs). Statistical analysis (analysis of variance, regression analysis) was employed to assess the impact of different treatments on shoot regeneration, with artificial intelligence (AI) models used to confirm the findings. Results GSN-12 exhibited superior shoot regeneration potential compared with STN-468, with an average of 4.99 shoots per explant versus 3.97. Optimal results were achieved with 5 mg·L^(-1) KIN preconditioning, 1/4MS postconditioning, and 80% red LED, with a maximum shoot count of 7.75 for GSN-12 under these conditions, while STN-468 reached 6.00 shoots under 10 mg·L^(-1) KIN preconditioning, MS with 0.05 mg·L^(-1) KIN postconditioning, and 75.0% red LED. Rooting was successfully achieved with naphthalene acetic acid and activated charcoal. Additionally, three powerful AI-based models, namely extreme gradient boost (XGBoost), random forest (RF), and artificial neural network-based multilayer perceptron (MLP) regression models, validated the findings. Conclusion GSN-12 outperformed STN-468, with optimal results from 5 mg·L^(-1) KIN + 1/4MS + 80% red LED. Applying machine learning-based prediction models to optimize cotton tissue culture protocols for shoot regeneration helps improve cotton regeneration efficiency.
Fund: Funded through the India Meteorological Department, New Delhi, India, under the Forecasting Agricultural output using Space, Agrometeorology and Land based observations (FASAL) project, fund number ASC/FASAL/KT-11/01/HQ-2010.
Abstract: Background Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its use in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is greatly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various techniques available, crop models are the most effective and widely used tools for predicting yields. Results This study compares statistical and machine learning models to assess their ability to predict cotton yield across major producing districts of Karnataka, India, utilizing a long-term dataset spanning from 1990 to 2023 that includes yield and weather factors. The artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2) for cotton. The model evaluation metrics, such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF), were also within acceptable limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts for which yield was predicted. These differences highlighted the differential interactions of weather factors in each district for cotton yield formation, reflecting the individual response of each weather factor under different soil and management conditions across the major cotton growing districts of Karnataka. Conclusions Compared with statistical models, machine learning models such as ANNs showed higher efficiency in forecasting cotton yield due to their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting in rainfed conditions and for studying the relative impacts of weather factors on yield. Thus, the study provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
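The three evaluation metrics named above (RMSE, nRMSE, EF) have standard definitions; as a minimal sketch with hypothetical yield values rather than the study's data:

```python
import math

def forecast_skill(observed, predicted):
    """RMSE, nRMSE (% of observed mean), and modelling efficiency (EF,
    also known as Nash-Sutcliffe efficiency) for a yield forecast."""
    n = len(observed)
    mean_obs = sum(observed) / n
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    rmse = math.sqrt(sse / n)
    nrmse = 100.0 * rmse / mean_obs      # as a percentage of the mean
    ef = 1.0 - sse / sst                 # 1 = perfect; <0 = worse than mean
    return rmse, nrmse, ef

# Hypothetical district yields (kg/ha): observed vs. model forecasts
rmse, nrmse, ef = forecast_skill([520.0, 480.0, 610.0, 550.0],
                                 [500.0, 470.0, 630.0, 560.0])
```

Common rules of thumb treat nRMSE below roughly 10% as excellent agreement and EF close to 1 as high model skill, which is how "within the acceptance limits" is typically judged.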
Fund: Supported by the Natural Science Foundation of Heilongjiang Province (LH2020C003).
Abstract: Taoren and Xingren are commonly used herbs in East Asian medicine with different medicinal functions but large economic differences, and there are cases of adulterated sales in market transactions. An effective adulteration recognition method based on hyperspectral technology and machine learning was designed as a non-destructive testing approach in this paper. A hyperspectral dataset comprising 500 Taoren and 500 Xingren samples was established; six feature selection methods were considered in the modeling of the radial basis function-support vector machine (RBF-SVM), and the interaction between the two optimization methods was further researched. Two mixed metaheuristic modeling methods, Mixed-PSO and Mixed-SA, were designed, which fuse band selection and hyperparameter optimization from two stages into one, with a detailed process analysis. The metrics of these mixed models improved compared with the traditional two-stage method. The accuracy of Mixed-PSO was 89.2% in five-fold cross-validation, 4.818% higher than that of the vanilla RBF-SVM; the accuracy of Mixed-SA was 88.7%, matching the traditional two-stage method while relying on only 48 key bands out of the full 100 bands in RBF-SVM model fitting.
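The core idea of the mixed metaheuristics is encoding band selection and SVM hyperparameters in a single particle, so both are optimized jointly rather than in two stages. The sketch below shows one plausible decoding scheme for such a particle; the 0.5 threshold and the hyperparameter ranges are illustrative assumptions, not the paper's settings:

```python
N_BANDS = 100  # number of hyperspectral bands, as in the abstract

def decode_particle(position):
    """Decode one mixed particle into (band mask, C, gamma).

    position : list of N_BANDS + 2 floats, each in [0, 1]. The first
    N_BANDS entries select bands (>= 0.5 keeps a band); the last two map
    log-uniformly onto assumed SVM hyperparameter ranges.
    """
    mask = [p >= 0.5 for p in position[:N_BANDS]]
    c = 10 ** (-2 + 6 * position[N_BANDS])          # C in [1e-2, 1e4]
    gamma = 10 ** (-4 + 4 * position[N_BANDS + 1])  # gamma in [1e-4, 1]
    return mask, c, gamma

# A particle that keeps every band and sits mid-range on both hyperparameters
mask, c, gamma = decode_particle([0.7] * N_BANDS + [0.5, 0.5])
```

A swarm fitness function would then train an RBF-SVM on the masked bands with the decoded (C, gamma) and return the cross-validated accuracy, letting PSO (or simulated annealing) search both spaces at once.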
Fund: Supported by the National Natural Science Foundation of China (62375013).
Abstract: As the core component of inertial navigation systems, fiber optic gyroscopes (FOGs), with technical advantages such as low power consumption, long lifespan, fast startup speed, and flexible structural design, are widely used in aerospace, unmanned driving, and other fields. However, due to the temperature sensitivity of optical devices, the influence of environmental temperature causes errors in FOGs, thereby greatly limiting their output accuracy. This work researches machine-learning-based temperature error compensation techniques for FOGs. Specifically, it focuses on compensating for the bias errors generated in the fiber ring due to the Shupe effect. This work proposes a composite model based on k-means clustering, support vector regression, and particle swarm optimization algorithms, and it significantly reduces redundancy within the samples by adopting interval sequence sampling. Moreover, metrics such as root mean square error (RMSE), mean absolute error (MAE), bias stability, and Allan variance are selected to evaluate the model's performance and compensation effectiveness. This work effectively enhances the consistency between data and models across different temperature ranges and temperature gradients, improving the bias stability of the FOG from 0.022 °/h to 0.006 °/h. Compared with existing methods utilizing a single machine learning model, the proposed method increases the bias stability improvement of the compensated FOG from 57.11% to 71.98%, and enhances the suppression of the rate ramp noise coefficient from 2.29% to 14.83%. This work improves the accuracy of the FOG after compensation, providing theoretical guidance and technical references for sensor error compensation work in other fields.
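Allan variance, one of the evaluation metrics above, characterizes gyro noise by how the averaged rate changes between successive clusters of samples. A minimal sketch of the non-overlapping estimator at a single cluster size (the real evaluation sweeps many cluster sizes to build the full Allan curve):

```python
def allan_deviation(rates, tau_samples):
    """Non-overlapping Allan deviation of a gyro rate series at one
    cluster size.

    rates       : equally spaced rate samples (e.g., deg/h)
    tau_samples : number of samples per averaging cluster
    """
    n_clusters = len(rates) // tau_samples
    # Average the rate within each cluster of tau_samples points
    means = [
        sum(rates[i * tau_samples:(i + 1) * tau_samples]) / tau_samples
        for i in range(n_clusters)
    ]
    # Allan variance: half the mean squared difference of adjacent clusters
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(n_clusters - 1)]
    return (sum(diffs) / (2 * len(diffs))) ** 0.5

# A constant-rate signal has zero Allan deviation at any cluster size
adev = allan_deviation([0.1] * 64, tau_samples=4)
```

Bias stability is read off the Allan deviation curve near its minimum, which is why a drop from 0.022 °/h to 0.006 °/h indicates effective temperature error compensation.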
Fund: Supported by the Guangdong Major Project of Basic and Applied Basic Research (Grant No. 2021B0301030001), the National Key Research and Development Program of China (Grant No. 2021YFB3802300), and the Foundation of National Key Laboratory of Shock Wave and Detonation Physics (Grant No. JCKYS2022212004).
Abstract: The graded density impactor (GDI) dynamic loading technique is crucial for acquiring the dynamic physical property parameters of materials used in weapons. The accuracy and timeliness of GDI structural design are key to achieving controllable stress-strain rate loading. In this study, we have, for the first time, combined one-dimensional fluid computational software with machine learning methods. We first elucidated the mechanisms by which GDI structures control stress and strain rates. Subsequently, we constructed a machine learning model to create a structure-property response surface. The results show that altering the loading velocity and interlayer thickness has a pronounced regulatory effect on stress and strain rates. In contrast, the impedance distribution index and target thickness have less significant effects on stress regulation, although there is a matching relationship between target thickness and interlayer thickness. Compared with traditional design methods, the machine learning approach offers a 10^(4)–10^(5) times increase in efficiency and the potential to achieve a global optimum, holding promise for guiding the design of GDI.
Abstract: The ammunition loading system manipulator is susceptible to gear failure due to high-frequency, heavy-load reciprocating motions and the absence of protective gear components. After a fault occurs, the distribution of fault characteristics under different loads is markedly inconsistent, and the data are hard to label, which makes it difficult for traditional diagnosis methods based on single-condition training to generalize to different conditions. To address these issues, this paper proposes a novel transfer discriminant neural network (TDNN) for gear fault diagnosis. Specifically, an optimized joint distribution adaptation mechanism (OJDA) is designed to solve the distribution alignment problem between two domains. To improve the classification effect within the domain and the feature recognition capability for few labeled data, metric learning is introduced to distinguish features from different fault categories. In addition, TDNN adopts a new pseudo-label training strategy to achieve label replacement by comparing the maximum probability of the pseudo-label with the test result. The proposed TDNN is verified on an experimental data set from the artillery manipulator device, achieving a diagnostic accuracy of 99.5% and significantly outperforming other traditional adaptation methods.
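Pseudo-label training of the kind described generally keeps an unlabeled sample only when the classifier's maximum predicted probability is high enough to trust. The following is a minimal sketch of that confidence-gated selection step; the 0.95 threshold and the softmax outputs are illustrative assumptions, and the paper's exact comparison rule may differ:

```python
def select_pseudo_labels(probabilities, threshold=0.95):
    """Keep a target-domain sample only when the classifier is confident.

    probabilities : list of per-class softmax outputs, one list per sample
    Returns (sample index, argmax class) pairs for samples whose top
    probability reaches the threshold; the rest are left unlabeled.
    """
    selected = []
    for i, probs in enumerate(probabilities):
        top = max(probs)
        if top >= threshold:
            selected.append((i, probs.index(top)))
    return selected

# Three unlabeled samples over four fault classes (hypothetical outputs)
chosen = select_pseudo_labels([
    [0.97, 0.01, 0.01, 0.01],   # confident -> pseudo-labeled as class 0
    [0.40, 0.30, 0.20, 0.10],   # ambiguous -> discarded
    [0.02, 0.02, 0.96, 0.00],   # confident -> pseudo-labeled as class 2
])
```

The selected pairs are then fed back as training labels for the next iteration, so the network gradually adapts to the unlabeled load condition without manual annotation.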
Fund: Supported by the Poongsan-KAIST Future Research Center Project and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (Grant No. 2023R1A2C2005661).
Abstract: This study presents a machine learning-based method for predicting the fragment velocity distribution in warhead fragmentation under explosive loading conditions. The fragment resultant velocities are correlated with key design parameters, including casing dimensions and detonation positions. The paper details the finite element analysis of fragmentation, the characterization of the dynamic hardening and fracture models, the generation of comprehensive datasets, and the training of the ANN model. The results show the influence of casing dimensions on fragment velocity distributions, with the tendencies indicating increased resultant velocity with reduced thickness and increased length and diameter. The model's predictive capability is demonstrated through accurate predictions for both training and testing datasets, showing its potential for real-time prediction of fragmentation performance.
Fund: Supported by the National Natural Science Foundation of China (Grant Nos. 12432018, 12372346) and the Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 12221002).
Abstract: A typical Whipple shield consists of double-layered plates with a certain gap. Space debris impacts the outer plate and is broken into a debris cloud (shattered, molten, vaporized) with dispersed energy and momentum, which reduces the risk of penetrating the bulkhead. In the realm of hypervelocity impact, strain rate (>10^(5) s^(-1)) effects are negligible, and fluid dynamics is employed to describe the impact process. Efficient numerical tools for precisely predicting the damage degree can greatly accelerate the design and optimization of advanced protective structures. Current hypervelocity impact research primarily focuses on the interaction between the projectile and front plate and on the movement of the debris cloud. However, the damage mechanism of debris cloud impacts on rear plates, the critical threat component, remains underexplored owing to complex multi-physics processes and prohibitive computational costs. Existing approaches, ranging from semi-empirical equations to a machine learning-based ballistic limit prediction method, are constrained to binary penetration classification. Moreover, the uneven data from experiments and simulations cause these methods to be ineffective when the projectile has an irregular shape and complicated flight attitude. Therefore, it is urgent to develop a new method for predicting rear plate damage, which can help to gain a deeper understanding of the damage mechanism. In this study, a machine learning (ML) method is developed to predict the damage distribution in the rear plate. Based on the unit velocity space, the discretized information of the debris cloud and rear plate damage from rare simulation cases is used as input data for training the ML models, while the generalization ability for damage distribution prediction is tested on other simulation cases with different attack angles. The results demonstrate that the training and prediction accuracies using the Random Forest (RF) algorithm significantly surpass those using Artificial Neural Networks (ANNs) and Support Vector Machine (SVM). The RF-based model effectively identifies damage features in sparsely distributed debris clouds and the cumulative effect. This study establishes an expandable new dataset that accommodates additional parameters to improve prediction accuracy. Results demonstrate the model's ability to overcome data imbalance limitations through debris cloud features, enabling rapid and accurate rear plate damage prediction across wider scenarios with minimal data requirements.
Abstract: Machine learning (ML) is well suited to the prediction of high-complexity, high-dimensional problems such as those encountered in terminal ballistics. We evaluate the performance of four popular ML-based regression models, extreme gradient boosting (XGBoost), artificial neural network (ANN), support vector regression (SVR), and Gaussian process regression (GP), on two common terminal ballistics problems: (a) predicting the V50 ballistic limit of monolithic metallic armour impacted by small and medium calibre projectiles and fragments, and (b) predicting the depth to which a projectile will penetrate a target of semi-infinite thickness. To achieve this we utilise two datasets, each consisting of approximately 1000 samples, collated from public release sources. We demonstrate that all four model types provide similarly excellent agreement when interpolating within the training data and diverge when extrapolating outside this range. Although extrapolation is not advisable for ML-based regression models, for applications such as lethality/survivability analysis, such capability is required. To circumvent this, we implement expert knowledge and physics-based models via enforced monotonicity, as a Gaussian prior mean, and through a modified loss function. The physics-informed models demonstrate improved performance over both classical physics-based models and the basic ML regression models, providing an ability to accurately fit experimental data when it is available and then revert to the physics-based model when it is not. The resulting models demonstrate high levels of predictive accuracy over a very wide range of projectile types, target materials and thicknesses, and impact conditions significantly more diverse than those achievable with any existing analytical approach. Compared with numerical analysis tools such as finite element solvers, the ML models run orders of magnitude faster. We provide some general guidelines throughout for the development, application, and reporting of ML models in terminal ballistics problems.
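The "revert to the physics-based model when data is unavailable" behaviour can be illustrated with a toy blend: near the training data the ML prediction dominates, and far outside it the estimate falls back to the physics model, mimicking a Gaussian process with a physics prior mean. The exponential weighting and all functions below are illustrative assumptions, not the paper's formulation:

```python
import math

def physics_informed_predict(x, ml_model, physics_model, train_xs, scale=1.0):
    """Blend an ML prediction with a physics-based fallback.

    The weight on the ML model decays with distance to the nearest
    training sample, so extrapolation reverts to the physics model.
    """
    d = min(abs(x - t) for t in train_xs)   # distance to nearest sample
    w = math.exp(-d / scale)                # 1 on-data, -> 0 far away
    return w * ml_model(x) + (1.0 - w) * physics_model(x)

ml = lambda x: 2.0 * x + 0.3       # fitted surrogate (hypothetical)
physics = lambda x: 2.0 * x        # physics-based estimate (hypothetical)

on_data = physics_informed_predict(1.0, ml, physics, train_xs=[1.0, 2.0, 3.0])
far_away = physics_informed_predict(50.0, ml, physics, train_xs=[1.0, 2.0, 3.0])
```

On a training point the blend returns the ML fit exactly, while at x = 50 it is indistinguishable from the physics model, which is the qualitative behaviour the abstract describes for its Gaussian-prior and modified-loss variants.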
Fund: Russian Science Foundation [grant number 22-71-10019].
Abstract: The paper considers the application of artificial neural networks (ANNs) for fast numerical evaluation of the residual impactor velocity for a family of perforated PMMA (polymethylmethacrylate) targets. The ANN models were trained using sets of numerical results on the impact of PMMA plates obtained via dynamic FEM coupled with an incubation time fracture criterion. The developed approach makes it possible to evaluate the impact strength of a particular target configuration without complicated FEM calculations, which require considerable computational resources. Moreover, it is shown that the ANN models are able to predict results for configurations which cannot be processed using the developed FEM routine due to numerical instabilities and errors: the trained neural network uses information from successful computations to obtain results for the problematic cases. A simple static problem of perforated plate deformation is discussed prior to the impact problem, and preferable ANN architectures are presented for both problems. Some insight into perforation pattern optimization using a genetic algorithm coupled with the ANN is also offered, and optimized perforation patterns which theoretically enhance the target impact strength are constructed.
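Coupling a genetic algorithm with a trained ANN surrogate, as described above, amounts to evolving candidate perforation patterns while the surrogate (rather than expensive FEM runs) scores their fitness. A minimal sketch with a stand-in fitness function; the encoding, operators, and the toy surrogate are all illustrative assumptions:

```python
import random

def genetic_search(fitness, n_bits=16, pop_size=20, generations=40, seed=0):
    """Tiny elitist GA over binary perforation patterns.

    fitness : callable scoring a bit list; stands in for the trained ANN.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]           # one-point crossover
            i = rng.randrange(n_bits)
            if rng.random() < 0.1:              # occasional bit-flip mutation
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in surrogate: rewards holes in even positions, penalizes odd ones
surrogate = lambda bits: sum(b if i % 2 == 0 else -b for i, b in enumerate(bits))
best = genetic_search(surrogate)
```

Because the surrogate evaluates in microseconds, the GA can explore thousands of patterns per second, which is what makes surrogate-assisted optimization attractive compared with running FEM inside the loop.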