This work proposes an iterative learning model predictive control (ILMPC) approach based on an adaptive fault observer (FOBILMPC) for fault-tolerant control and trajectory tracking in air-breathing hypersonic vehicles. To augment the control input, this online control law combines model predictive control (MPC) with the concept of iterative learning control (ILC). By using offline data to reduce the errors of the linearized model, the strategy can effectively increase the robustness of the control system and guarantee that disturbances are suppressed. An adaptive fault observer is designed on the basis of the proposed ILMPC approach to enhance overall fault tolerance by estimating and compensating for actuator disturbances and the fault degree. During the derivation, a linearized model of the longitudinal dynamics is established. Numerical simulations demonstrate that the proposed ILMPC approach reduces tracking error and speeds up convergence compared with the offline controller, making it a promising candidate for the design of hypersonic vehicle control systems.
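As a rough illustration of the iterative learning idea behind this controller, the sketch below shows a P-type ILC update that refines a feedforward input across trials using the previous trial's tracking error. It is not the authors' FOBILMPC implementation; the toy plant, learning gain, and reference trajectory are placeholder assumptions.

```python
import numpy as np

# Minimal P-type iterative learning control sketch on a toy first-order plant.
# Plant, gains, and reference are illustrative assumptions, not the paper's model.
N, trials = 50, 8                        # samples per trial, number of learning trials
a, b = 0.9, 0.5                          # toy discrete plant: x[k+1] = a*x[k] + b*u[k]
ref = np.sin(np.linspace(0, np.pi, N))   # reference trajectory to track
u = np.zeros(N)                          # feedforward input, refined trial by trial
L = 0.8                                  # learning gain

for trial in range(trials):
    x = np.zeros(N + 1)
    for k in range(N):
        x[k + 1] = a * x[k] + b * u[k]
    e = ref - x[1:]                      # tracking error of this trial
    u = u + L * e                        # P-type ILC update: reuse last trial's error
    print(f"trial {trial}: RMS error = {np.sqrt(np.mean(e**2)):.4f}")
```

The RMS error printed each trial shrinks as the learned input absorbs the repeatable part of the model error, which is the same mechanism the abstract invokes when it uses offline data to reduce the linearized model's errors.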
This study presents a machine learning-based method for predicting the fragment velocity distribution in warhead fragmentation under explosive loading conditions. The fragment resultant velocities are correlated with key design parameters, including casing dimensions and detonation positions. The paper details the finite element analysis of fragmentation, the characterization of the dynamic hardening and fracture models, the generation of comprehensive datasets, and the training of the ANN model. The results show the influence of casing dimensions on fragment velocity distributions, with the tendencies indicating increased resultant velocity with reduced thickness and increased length and diameter. The model's predictive capability is demonstrated through accurate predictions for both the training and testing datasets, showing its potential for real-time prediction of fragmentation performance.
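A minimal sketch of the kind of ANN regression described here, assuming synthetic stand-in data: the real rows would come from the finite element simulations, and the feature set (thickness, length, diameter, detonation position) and velocity trend encoded below are only illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Small ANN regression sketch mapping casing design parameters to fragment resultant velocity.
# Synthetic data stands in for the FE-generated dataset described in the abstract.
rng = np.random.default_rng(0)
X = rng.uniform([5, 100, 50, 0.0], [15, 300, 150, 1.0], size=(500, 4))  # thickness, length, diameter, detonation position
y = 1500 + 60 * X[:, 1] / X[:, 0] + 2 * X[:, 2] + 300 * X[:, 3] + rng.normal(0, 50, size=500)  # toy velocity (m/s)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0))
ann.fit(X_tr, y_tr)
print("test R^2:", round(ann.score(X_te, y_te), 3))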
[Objective] Accurate prediction of tomato growth height is crucial for optimizing production environments in smart farming. However, current prediction methods predominantly rely on empirical, mechanistic, or learning-based models that utilize either image data or environmental data. These methods fail to fully leverage multi-modal data to capture the diverse aspects of plant growth comprehensively. [Methods] To address this limitation, a two-stage phenotypic feature extraction (PFE) model based on the deep learning algorithms of recurrent neural network (RNN) and long short-term memory (LSTM) was developed. The model integrated environmental and plant information to provide a holistic understanding of the growth process, and employed phenotypic and temporal feature extractors to comprehensively capture both types of features, enabling a deeper understanding of the interaction between tomato plants and their environment and ultimately leading to highly accurate predictions of growth height. [Results and Discussions] The experimental results showed the model's effectiveness: when predicting the next two days based on the past five days, the PFE-based RNN and LSTM models achieved mean absolute percentage errors (MAPE) of 0.81% and 0.40%, respectively, significantly lower than the 8.00% MAPE of the large language model (LLM) and the 6.72% MAPE of the Transformer-based model. In longer-term predictions, the 10-day prediction for 4 days ahead and the 30-day prediction for 12 days ahead, the PFE-RNN model continued to outperform the two baseline models, with MAPEs of 2.66% and 14.05%, respectively. [Conclusions] The proposed method, which leverages phenotypic-temporal collaboration, shows great potential for intelligent, data-driven management of tomato cultivation, making it a promising approach for enhancing the efficiency and precision of smart tomato planting management.
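The following is a minimal LSTM sequence-regression sketch of the 5-day-window, 2-day-horizon setup quoted above. It is not the two-stage PFE architecture; the per-day feature dimension and the dummy tensors are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Minimal LSTM regressor: a 5-day window of per-day features -> next-2-day growth heights.
# Feature dimension and horizons are placeholder assumptions, not the PFE model itself.
class HeightLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=32, horizon=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):                      # x: (batch, 5 days, n_features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                # (batch, horizon)

model = HeightLSTM()
x = torch.randn(16, 5, 8)                      # dummy batch of 5-day feature sequences
y = torch.randn(16, 2)                         # dummy next-2-day growth heights
loss = nn.functional.mse_loss(model(x), y)
loss.backward()                                # gradients for one training step
print("MSE on dummy batch:", loss.item())
```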
Rock burst is a kind of geological disaster in rock excavation in high-stress areas. To evaluate the intensity of rock burst, the maximum shear stress, uniaxial compressive strength, uniaxial tensile strength, and rock elastic energy index were selected as input factors, and the burst pit depth as the output factor. A rock burst prediction model was proposed based on genetic algorithms and the extreme learning machine, with the effect of the structural surface taken into consideration. Based on engineering examples of tunnels, the observed and collected data were divided into a training set, a validation set, and a prediction set. The training set and validation set were used to train and optimize the model, and the parameter optimization results are presented. The number of hidden layer nodes was 450, and the fitness of the predictions was 0.0197 under the optimal combination of input weights and offset vector. The optimized model was then tested on the prediction set. The results show that the proposed model is effective: the maximum relative error is 4.71% and the average relative error is 3.20%, which indicates that the model has practical value in related engineering applications.
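A sketch of the core extreme learning machine step referred to here, assuming synthetic inputs: random hidden weights and biases, a sigmoid hidden layer, and a least-squares output layer via the Moore-Penrose pseudo-inverse. The genetic-algorithm tuning of the input weights and offset vector is not reproduced.

```python
import numpy as np

# Core ELM step: random hidden weights + least-squares output layer.
# The GA tuning from the paper is omitted; the data here is synthetic.
rng = np.random.default_rng(1)
X = rng.uniform(size=(120, 4))                 # 4 input factors (stresses, strengths, energy index)
y = 3.0 + X @ np.array([2.0, -1.0, 0.5, 1.5]) + 0.1 * rng.normal(size=120)  # toy burst pit depth

n_hidden = 50
W = rng.normal(size=(4, n_hidden))             # random input weights
b = rng.normal(size=n_hidden)                  # random biases ("offset vector")
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))         # sigmoid hidden-layer outputs
beta = np.linalg.pinv(H) @ y                   # output weights via Moore-Penrose pseudo-inverse

pred = H @ beta
print("mean relative error:", np.mean(np.abs((pred - y) / y)))
```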
Target maneuver trajectory prediction is an important prerequisite for air combat situation awareness and maneuver decision-making. However, how to use the large amount of trajectory data generated by air combat confrontation training to achieve real-time and accurate prediction of target maneuver trajectories is an urgent problem to be solved. To solve this problem, this paper proposes a hybrid algorithm based on transfer learning, online learning, ensemble learning, regularization technology, a target maneuvering segmentation point recognition algorithm, and the Volterra series, abbreviated as AERTrOS-Volterra. Firstly, the model makes full use of the large number of trajectory samples generated by air combat confrontation training and constructs a Tr-Volterra algorithm framework suitable for air combat target maneuver trajectory prediction, which realizes the extraction of effective information from historical trajectory data. Secondly, to improve the real-time online prediction accuracy and robustness of the model in complex electromagnetic environments, a robust regularized online sequential Volterra prediction model is built on the Tr-Volterra framework by integrating an online learning method, regularization technology, and an inverse weighting calculation method based on the a priori error. Finally, inspired by the preferable performance of model ensembles, an ensemble learning scheme is incorporated into the proposed algorithm, which adaptively updates the ensemble prediction model according to the performance of the models on real-time samples and the recognition results of target maneuvering segmentation points, including the adaptation of model weights, the adaptation of parameters, and the dynamic inclusion and removal of models. Compared with many existing time series prediction methods, the newly proposed target maneuver trajectory prediction algorithm can fully mine the prior knowledge contained in historical data to assist the current prediction. The rationality and effectiveness of the proposed algorithm are verified by simulations on three chaotic time series data sets and a set of real target maneuver trajectory data.
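To make the Volterra-series building block concrete, the sketch below fits a second-order Volterra predictor by ridge-regularized least squares on a toy signal. The memory length, regularization constant, and signal are assumptions; the transfer, online, and ensemble layers of AERTrOS-Volterra are not shown.

```python
import numpy as np

# Second-order Volterra series predictor fitted by ridge-regularized least squares.
# Memory length, regularization, and the toy signal are illustrative assumptions.
def volterra_features(x, memory=3):
    """Bias + linear + quadratic (second-order) Volterra terms over a sliding memory window."""
    rows = []
    for k in range(memory, len(x)):
        w = x[k - memory:k]
        quad = np.outer(w, w)[np.triu_indices(memory)]   # unique second-order products
        rows.append(np.concatenate(([1.0], w, quad)))
    return np.array(rows)

rng = np.random.default_rng(2)
x = np.sin(0.3 * np.arange(300)) + 0.05 * rng.normal(size=300)   # toy trajectory coordinate
Phi = volterra_features(x, memory=3)
target = x[3:]                                                    # one-step-ahead targets

lam = 1e-3                                                        # ridge regularization
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ target)
pred = Phi @ theta
print("one-step RMS error:", np.sqrt(np.mean((pred - target) ** 2)))
```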
When detecting deletions in complex human genomes, split-read approaches using short reads generated by next-generation sequencing still face the challenge that either the false discovery rate is high or the sensitivity is low. To address this problem, an integrated strategy is proposed. It organically combines the fundamental theories of the three mainstream methods (read-pair approaches, split-read technologies, and read-depth analysis) with modern machine learning algorithms, using feature extraction as a bridge. Compared with state-of-the-art split-read methods for deletion detection at both low and high sequence coverage, the machine-learning-aided strategy shows great ability in intelligently balancing sensitivity and false discovery rate and obtaining a call set that is both more sensitive and more precise at single-base-pair resolution. Thus, users no longer need to rely on prior experience to make an unnecessary trade-off beforehand and adjust parameters repeatedly. It should be noted that modern machine learning models can play an important role in the field of structural variation prediction.
Ship motions induced by waves have a significant impact on the efficiency and safety of offshore operations. Real-time prediction of ship motions in the next few seconds plays a crucial role in performing sensitive activities. However, the obvious memory effect of ship motion time series brings certain difficulty to rapid and accurate prediction. Therefore, a real-time framework based on the Long Short-Term Memory (LSTM) neural network model is proposed to predict ship motions in regular and irregular head waves. A 15,000 TEU container ship model is employed to illustrate the proposed framework. The numerical implementation and the real-time ship motion prediction in irregular head waves corresponding to different time scales are carried out based on the container ship model, and the related experimental data are employed to verify the numerical simulation results. The results show that the proposed method is more robust than the classical extreme short-term prediction method based on potential flow theory in the prediction of nonlinear ship motions.
Batch-to-batch temperature control of a semi-batch chemical reactor with a heating/cooling system was discussed in this study. Without extensive modeling investigations, a two-dimensional (2D) general predictive iterative learning control (2D-MGPILC) strategy based on a multi-model with time-varying weights was introduced for optimizing the tracking performance of the desired temperature profile. This strategy was modeled based on an iterative learning control (ILC) algorithm for a 2D system and designed in the generalized predictive control (GPC) framework. Firstly, a multi-model structure with time-varying weights was developed to describe the complex operation of a general semi-batch reactor. Secondly, the 2D-MGPILC algorithm was proposed to simultaneously optimize the dynamic performance along the time and batch axes. Finally, a simulation of the controller design for a semi-batch reactor with multiple reactions demonstrated that satisfactory performance could be achieved despite repetitive or non-repetitive disturbances.
This paper studies a supervisory control system for a hybrid off-highway electric vehicle under the charge-sustaining (CS) condition. A new predictive double Q-learning with backup models (PDQL) scheme is proposed to optimize engine fuel use in real-world driving and improve energy efficiency with a faster and more robust learning process. Unlike existing "model-free" methods, which solely follow on-policy or off-policy updates of the knowledge bases (Q-tables), the PDQL is developed with the capability to merge both on-policy and off-policy learning by introducing a backup model (Q-table). Experimental evaluations are conducted on software-in-the-loop (SiL) and hardware-in-the-loop (HiL) test platforms built on real-time modelling of the studied vehicle. Compared to standard double Q-learning (SDQL), the PDQL needs only half of the learning iterations to achieve better energy efficiency than the SDQL at the end of the learning process. In the SiL tests under 35 rounds of learning, the results show that the PDQL can improve vehicle energy efficiency by 1.75% over the SDQL. By implementing the PDQL in HiL under four predefined real-world conditions, the PDQL robustly saves more than 5.03% more energy than the SDQL scheme.
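For reference, the sketch below shows the standard tabular double Q-learning rule that the PDQL builds on, applied to a toy environment. The backup-model merge and the vehicle energy-management problem are not reproduced; states, actions, and rewards are placeholder assumptions.

```python
import numpy as np

# Plain tabular double Q-learning on a toy problem; the PDQL backup-model merge is not shown.
rng = np.random.default_rng(3)
n_states, n_actions = 4, 2
QA = np.zeros((n_states, n_actions))
QB = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.2

def step(s, a):
    """Toy environment: action 1 yields a higher reward (e.g. lower fuel use) and moves forward."""
    return (s + 1) % n_states, (1.0 if a == 1 else 0.2)

s = 0
for _ in range(2000):
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(QA[s] + QB[s]))
    s_next, r = step(s, a)
    if rng.random() < 0.5:                       # update table A using B's evaluation
        a_star = int(np.argmax(QA[s_next]))
        QA[s, a] += alpha * (r + gamma * QB[s_next, a_star] - QA[s, a])
    else:                                        # update table B using A's evaluation
        a_star = int(np.argmax(QB[s_next]))
        QB[s, a] += alpha * (r + gamma * QA[s_next, a_star] - QB[s, a])
    s = s_next

print("greedy action per state:", np.argmax(QA + QB, axis=1))
```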
A typical Whipple shield consists of double-layered plates separated by a certain gap. Space debris impacts the outer plate and is broken into a debris cloud (shattered, molten, vaporized) with dispersed energy and momentum, which reduces the risk of penetrating the bulkhead. In the realm of hypervelocity impact, strain rate (>10^5 s^-1) effects are negligible, and fluid dynamics is employed to describe the impact process. Efficient numerical tools for precisely predicting the damage degree can greatly accelerate the design and optimization of advanced protective structures. Current hypervelocity impact research primarily focuses on the interaction between the projectile and the front plate and on the movement of the debris cloud. However, the damage mechanism of debris cloud impacts on the rear plate, the critical threat component, remains underexplored owing to complex multi-physics processes and prohibitive computational costs. Existing approaches, ranging from semi-empirical equations to a machine learning-based ballistic limit prediction method, are constrained to binary penetration classification. Moreover, the uneven data from experiments and simulations make these methods ineffective when the projectile has an irregular shape and a complicated flight attitude. Therefore, it is urgent to develop a new method for predicting rear plate damage, which can help to gain a deeper understanding of the damage mechanism. In this study, a machine learning (ML) method is developed to predict the damage distribution in the rear plate. Based on the unit velocity space, the discretized information of the debris cloud and rear plate damage from a small number of simulation cases is used as input data for training the ML models, while the generalization ability for damage distribution prediction is tested on other simulation cases with different attack angles. The results demonstrate that the training and prediction accuracies using the Random Forest (RF) algorithm significantly surpass those using Artificial Neural Networks (ANNs) and Support Vector Machines (SVM). The RF-based model effectively identifies damage features in sparsely distributed debris clouds and the cumulative effect. This study establishes an expandable dataset that accommodates additional parameters to improve prediction accuracy. The results demonstrate the model's ability to overcome data imbalance limitations through debris cloud features, enabling rapid and accurate rear plate damage prediction across wider scenarios with minimal data requirements.
Rockburst prediction is of vital significance to the design and construction of underground hard rock mines. A rockburst database consisting of 102 case histories, covering 1998-2011 data from 14 hard rock mines, was examined for rockburst prediction in burst-prone mines by three tree-based ensemble methods. The dataset was described with six widely accepted indices: the maximum tangential stress around the excavation boundary (MTS), the uniaxial compressive strength (UCS) and uniaxial tensile strength (UTS) of the intact rock, the stress concentration factor (SCF), the rock brittleness index (BI), and the strain energy storage index (EEI). Two boosting algorithms (AdaBoost.M1 and SAMME) and a bagging algorithm, with classification trees as the base classifier, were evaluated for their ability to learn rockburst. The available dataset was randomly divided into a training set (2/3 of the whole dataset) and a testing set (the remaining data). Repeated 10-fold cross validation (CV) was applied as the validation method for tuning the hyper-parameters. Margin analysis and variable relative importance were employed to analyze some characteristics of the ensembles. According to the 10-fold CV, the accuracy analysis of the rockburst dataset demonstrated that the best prediction method for rockburst potential is bagging when compared with the AdaBoost.M1 and SAMME algorithms and empirical criteria methods.
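A compact sketch of the winning setup, bagged classification trees validated with repeated 10-fold cross validation, is given below. The 102 synthetic rows merely stand in for case histories described by the six indices (MTS, UCS, UTS, SCF, BI, EEI); the toy class labels are assumptions.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Bagged classification trees (BaggingClassifier's default base estimator is a decision tree)
# evaluated with repeated 10-fold CV, mirroring the validation scheme in the abstract.
rng = np.random.default_rng(4)
X = rng.normal(size=(102, 6))                                  # six indices per case history (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=102) > 0).astype(int)  # toy intensity class

bag = BaggingClassifier(n_estimators=200, random_state=0)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
scores = cross_val_score(bag, X, y, cv=cv, scoring="accuracy")
print(f"repeated 10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```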
Based on the complex correlation between the geochemical element distribution patterns at the surface and the types of bedrock, and on the powerful capability of machine learning algorithms in capturing subtle relationships, four machine learning algorithms, namely decision tree (DT), random forest (RF), XGBoost (XGB), and LightGBM (LGBM), were implemented for lithostratigraphic classification and prediction in a Quaternary coverage area based on stream sediment geochemical sampling data from the Chahanwusu River of Dulan County, Qinghai Province, China. The local Moran's I, representing spatial autocorrelation, and terrain factors, representing surface geological processes, were calculated as additional features. The accuracy, precision, recall, and F1 scores were chosen as the evaluation indices, and Voronoi diagrams were applied for visualization. The results indicate that the XGB and LGBM models both performed well. They not only obtained relatively satisfactory classification performance but also predicted lithostratigraphic types of the Quaternary coverage area that are essentially consistent with their neighborhoods of known types. It is feasible to classify lithostratigraphic types from the concentrations of geochemical elements in the sediments, and the XGB and LGBM algorithms are recommended for lithostratigraphic classification.
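The sketch below illustrates the local Moran's I statistic that the study adds as a spatial-autocorrelation feature. The k-nearest-neighbor, row-standardized weight scheme and the synthetic sampling sites are assumptions; the paper does not specify its weighting in this abstract.

```python
import numpy as np

# Local Moran's I as an extra spatial-autocorrelation feature for each sampling site.
# The k-NN weight scheme, coordinates, and concentrations are illustrative assumptions.
def local_morans_i(coords, values, k=5):
    n = len(values)
    z = values - values.mean()
    m2 = (z ** 2).sum() / n
    I = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]        # k nearest neighbors, excluding the point itself
        w = np.ones(k) / k                         # row-standardized binary weights
        I[i] = (z[i] / m2) * np.dot(w, z[neighbors])
    return I

rng = np.random.default_rng(5)
coords = rng.uniform(size=(200, 2))                # sampling-site coordinates
conc = np.sin(4 * coords[:, 0]) + 0.2 * rng.normal(size=200)   # one element's concentration
print("first five local Moran's I values:", np.round(local_morans_i(coords, conc)[:5], 3))
```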
Deficiencies of applying the traditional least squares support vector machine (LS-SVM) to time series online prediction were specified. According to the properties of the kernel function matrix and using the recursive calculation of block matrices, a new time series online prediction algorithm based on an improved LS-SVM was proposed. The historical training results are fully utilized and the computing speed of the LS-SVM is enhanced. The improved algorithm was then applied to time series online prediction. Based on the operational data provided by the Northwest Power Grid of China, the method was used in the transient stability prediction of an electric power system. The results show that, compared with the calculation time of the traditional LS-SVM (75-1600 ms), that of the proposed method in different time windows is 40-60 ms, and the prediction accuracy (normalized root mean squared error) of the proposed method is above 0.8. The improved method is therefore better than the traditional LS-SVM and more suitable for time series online prediction.
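For orientation, the sketch below solves the basic LS-SVM regression system (an RBF kernel and one dense linear solve in place of a QP). The block-recursive online update that gives the speed-up described in the abstract is not reproduced; the data, kernel width, and regularization constant are assumptions.

```python
import numpy as np

# Minimal LS-SVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(6)
X = np.linspace(0, 6, 80)[:, None]
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=80)

gamma = 100.0                                   # regularization constant
K = rbf(X, X)
n = len(y)
A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)),  K + np.eye(n) / gamma]])
sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
b, alpha = sol[0], sol[1:]

X_new = np.array([[1.5], [3.0]])
print("predictions:", rbf(X_new, X) @ alpha + b)
```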
Short-term travel flow prediction has been at the core of intelligent transport systems (ITS). An advanced method based on fuzzy C-means (FCM) and the extreme learning machine (ELM) is discussed through analysis of the prediction model. First, the model takes advantage of the ELM algorithm's ability to adapt to nonlinear systems and its fast speed. Second, through the FCM clustering function, the model obtains the clusters and the memberships within each cluster, which means that the associated observation points are chosen. The spatial relations can therefore be exploited by giving a weight to every observation point when the model trains and tests the ELM. Third, through analysis of actual data from Haining City in 2016, the feasibility and advantages of the FCM-ELM prediction model are shown in comparison with other prediction algorithms.
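A compact fuzzy C-means loop, as sketched below, produces the kind of soft memberships that could weight observation points for a downstream ELM in the spirit of the abstract. The data, cluster count, and fuzzifier are illustrative assumptions.

```python
import numpy as np

# Compact fuzzy C-means (FCM): alternate membership and center updates.
def fcm(X, c=3, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ij = d_ij^(-2/(m-1)) / sum_k d_ik^(-2/(m-1))
        u = d ** (-2 / (m - 1)) / np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True)
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return u, centers

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(loc, 0.3, size=(50, 2)) for loc in ([0, 0], [2, 2], [4, 0])])
u, centers = fcm(X)
print("cluster centers:\n", np.round(centers, 2))
```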
Destination prediction has attracted widespread attention because it can help vehicle-aid systems recommend related services in advance to improve the user driving experience. However, the relevant research mainly predicts destinations from the driving trajectories of vehicles, which makes early destination prediction difficult to achieve. To this end, we propose an early destination prediction model, DP-BPR, which predicts destinations from users' travel times and locations. There are three challenges in building the model: 1) the extremely sparse historical data make it challenging to predict destinations directly from raw historical data; 2) destinations are related not only to departure points but also to departure times, so both should be taken into consideration in prediction; and 3) destination preferences must be learned from historical data. To deal with these challenges, we map the sparse high-dimensional data to a dense low-dimensional space through embedding learning using deep neural networks. We learn embeddings not only for users but also for locations and times under the supervision of historical data, and then use Bayesian personalized ranking (BPR) to learn to rank destinations. Experimental results on the Zebra dataset show the effectiveness of DP-BPR.
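The sketch below shows the core BPR update on user and destination embeddings: sample a (user, observed destination, unobserved destination) triple and push the observed destination's score above the unobserved one. Embedding sizes and the randomly sampled triples are placeholder assumptions; in practice the positive destination would come from the user's historical trips.

```python
import numpy as np

# Bayesian personalized ranking (BPR) SGD update on user/destination embeddings.
rng = np.random.default_rng(8)
n_users, n_dest, dim = 100, 50, 16
U = 0.1 * rng.normal(size=(n_users, dim))      # user embeddings
D = 0.1 * rng.normal(size=(n_dest, dim))       # destination embeddings
lr, reg = 0.05, 0.01

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(10000):
    u = rng.integers(n_users)
    i, j = rng.integers(n_dest), rng.integers(n_dest)   # positive and sampled negative destination
    u_vec, d_i, d_j = U[u].copy(), D[i].copy(), D[j].copy()
    x_uij = u_vec @ (d_i - d_j)                          # preference gap
    g = sigmoid(-x_uij)                                  # gradient scale of -log sigmoid(x_uij)
    U[u] += lr * (g * (d_i - d_j) - reg * u_vec)
    D[i] += lr * (g * u_vec - reg * d_i)
    D[j] += lr * (-g * u_vec - reg * d_j)

print("example ranking scores for user 0:", np.round(U[0] @ D.T, 2)[:5])
```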
An impact point prediction (IPP) guidance method based on supervised learning is proposed to address the problem of precise guidance for a ballistic missile in high-maneuver penetration conditions. An accurate ballistic trajectory model is applied to generate training samples, and ablation experiments are conducted to determine the mapping relationship between the flight state and the impact point. At the same time, the impact point coordinates are decoupled to improve the prediction accuracy, and the sigmoid activation function is improved to increase the prediction efficiency. On this basis, an IPP neural network model is established that resolves the contradiction between the accuracy and the speed of the IPP. In view of the performance deviation of the divert control system, the mapping relationship between the guidance parameters and the impact deviation is analysed based on the variational principle. In addition, a fast iterative model of the guidance parameters is designed with reference to the Newton iteration method, which solves the nonlinear strong-coupling problem of the guidance parameter solution. Monte Carlo simulation results show that the prediction accuracy of the impact point is high, with a 3σ prediction error of 4.5 m, and the guidance method is robust, with a 3σ error of 7.5 m. On an STM32F407 single-chip microcomputer, a single IPP takes about 2.374 ms and a single guidance solution takes about 9.936 ms, showing good real-time performance and a certain engineering application value.
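To illustrate the Newton-style iteration mentioned for the guidance parameter solution, the sketch below drives a predicted impact deviation to zero by adjusting two parameters through Newton steps with a numerical Jacobian. The nonlinear map g() is a toy stand-in, not the paper's guidance dynamics.

```python
import numpy as np

# Newton iteration that nulls a toy (downrange, crossrange) impact deviation
# by adjusting two guidance parameters. g() is an illustrative stand-in only.
def g(p):
    return np.array([np.sin(p[0]) + 0.3 * p[1] - 0.2,
                     0.5 * p[0] + p[1] ** 2 - 0.1])

def numerical_jacobian(f, p, h=1e-6):
    J = np.zeros((2, 2))
    for j in range(2):
        dp = np.zeros(2)
        dp[j] = h
        J[:, j] = (f(p + dp) - f(p - dp)) / (2 * h)
    return J

p = np.array([0.0, 0.0])                         # initial guidance parameters
for it in range(10):
    dev = g(p)
    if np.linalg.norm(dev) < 1e-9:
        break
    p = p - np.linalg.solve(numerical_jacobian(g, p), dev)   # Newton update

print("converged parameters:", p, "residual deviation:", g(p))
```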
To overcome the deficiencies of high computational complexity and low convergence speed in traditional neural networks, a novel bio-inspired machine learning algorithm named brain emotional learning (BEL) is introduced. BEL mimics the emotional learning mechanism in the brain, which has the superior features of fast learning and quick reacting. To further improve the performance of BEL in data analysis, a genetic algorithm (GA) is adopted for optimally tuning the weights and biases of the amygdala and orbitofrontal cortex in the BEL neural network. The integrated algorithm, named GA-BEL, combines the advantages of the fast learning of BEL and the global optimum search of GA. GA-BEL has been tested on a real-world chaotic time series of a geomagnetic activity index for prediction, on eight benchmark datasets from the University of California, Irvine (UCI), and on a functional magnetic resonance imaging (fMRI) dataset for classification. Comparisons of the experimental results show that the proposed GA-BEL algorithm is more accurate than the original BEL in prediction and more effective when dealing with large-scale classification problems. Further, it outperforms most other traditional algorithms in terms of accuracy and execution speed in both prediction and classification applications.
The increased demand for superior materials has highlighted the need to investigate the mechanical properties of composites to achieve enhanced constitutive relationships. Fiber-reinforced polymer composites have emerged as an integral part of materials development with tailored mechanical properties. However, the complexity and heterogeneity of such composites make it considerably more challenging to quantify their properties precisely and attain an optimal design of structures through experimental and computational approaches. In order to avoid complex, cumbersome, and labor-intensive experimental and numerical modeling approaches, a machine learning (ML) model is proposed here that takes the microstructural image as input, over a range of Young's moduli of the carbon fibers and the neat epoxy, and outputs a visualization of the stress component S11 (principal stress in the x-direction). To obtain the training data for the ML model, a short-carbon-fiber-filled specimen under quasi-static tension is modeled based on a 2D Representative Area Element (RAE) using finite element analysis. The composite includes short carbon fibers with an aspect ratio of 7.5 that are infilled in the epoxy system at random orientations and positions generated using the Simple Sequential Inhibition (SSI) process. The study reveals that the pix2pix deep learning Convolutional Neural Network (CNN) model is robust enough to predict the stress fields in the composite for a given arrangement of short fibers in epoxy over the specified range of Young's moduli with high accuracy. The CNN model achieves a correlation score of about 0.999 and an L2 norm of less than 0.005 for a majority of the samples in the design spectrum, indicating excellent prediction capability. In this paper, we have focused on the stage-wise chronological development of the CNN model with optimized performance for predicting the full-field stress maps of fiber-reinforced composite specimens. The development of such a robust and efficient algorithm would significantly reduce the time and cost required to study and design new composite materials by eliminating numerical inputs in favor of direct microstructural images.
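The two quality metrics quoted above can be computed as in the sketch below, which compares a predicted stress map against a ground-truth FE field. The exact normalization the authors use for the L2 norm is not stated in the abstract, so the relative-L2 definition and the stand-in arrays are assumptions.

```python
import numpy as np

# Field-comparison metrics matching those quoted in the abstract: correlation score and
# (here, relative) L2 norm between a predicted and a reference S11 stress map.
def correlation_score(true_field, pred_field):
    return np.corrcoef(true_field.ravel(), pred_field.ravel())[0, 1]

def relative_l2(true_field, pred_field):
    return np.linalg.norm(pred_field - true_field) / np.linalg.norm(true_field)

rng = np.random.default_rng(9)
s11_true = rng.normal(size=(256, 256))                       # stand-in for an FE S11 stress map
s11_pred = s11_true + 0.01 * rng.normal(size=(256, 256))     # stand-in for a CNN prediction
print("correlation:", round(correlation_score(s11_true, s11_pred), 4),
      "relative L2:", round(relative_l2(s11_true, s11_pred), 4))
```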
A new parallel architecture for quantified Boolean formula (QBF) solving was proposed, together with a machine-learning-based prediction model of how knowledge sharing affects solving performance in a QBF parallel solving system, and an experimental evaluation scheme was designed. The experiments show that the characterization factors of clauses and cubes markedly influence the solving performance. A heuristic machine learning algorithm was applied: the support vector machine was chosen to predict the performance of the QBF parallel solving system based on clause sharing and cube sharing. The relative error of the prediction can be controlled within a reasonable range of 20%-30%. The results show the important and complex role that knowledge sharing plays in any modern parallel solver. The parallel solver with machine learning reduces the quantity of shared knowledge by about 30%, saving computational resources without reducing the performance of the solving system.
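A minimal sketch of this kind of performance prediction, assuming synthetic data: support vector regression maps sharing-related features to a solving time, and the relative prediction error is then checked. The feature names and the toy runtime model are assumptions, not the paper's instrumentation.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# SVR predicting solving time from sharing-related features, with relative-error evaluation.
rng = np.random.default_rng(10)
X = rng.uniform(size=(300, 3))          # e.g. clause-sharing rate, cube-sharing rate, formula size
t = 10 + 40 * X[:, 2] - 15 * X[:, 0] * X[:, 2] + rng.normal(0, 1, 300)   # toy solving time (s)

X_tr, X_te, t_tr, t_te = train_test_split(X, t, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(X_tr, t_tr)
rel_err = np.abs(model.predict(X_te) - t_te) / np.abs(t_te)
print("mean relative error:", round(rel_err.mean(), 3))
```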
Funding (FOBILMPC fault-tolerant control of air-breathing hypersonic vehicles): supported by the National Natural Science Foundation of China (12072090).
Funding (machine learning prediction of warhead fragment velocity distribution): supported by the Poongsan-KAIST Future Research Center Project and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (Grant No. 2023R1A2C2005661).
Funding (GA-ELM rock burst prediction): Project (2013CB036004) supported by the National Basic Research Program of China; Project (51378510) supported by the National Natural Science Foundation of China.
Funding (AERTrOS-Volterra target maneuver trajectory prediction): supported by the Fundamental Research Funds for the Air Force Engineering University under Grant No. XZJK2019040.
Funding (machine-learning-aided deletion detection in human genomes): Project (61472026) supported by the National Natural Science Foundation of China; Project (2014J410081) supported by the Guangzhou Scientific Research Program, China.
Funding (2D-MGPILC batch reactor temperature control): Projects (61673205, 21727818, 61503180) supported by the National Natural Science Foundation of China; Project (2017YFB0307304) supported by the National Key R&D Program of China; Project (BK20141461) supported by the Natural Science Foundation of Jiangsu Province, China.
Funding (PDQL energy management for a hybrid off-highway electric vehicle): Project (KF2029) supported by the State Key Laboratory of Automotive Safety and Energy (Tsinghua University), China; Project (102253) supported partially by Innovate UK.
Funding (machine learning prediction of Whipple shield rear plate damage): supported by the National Natural Science Foundation of China (Grant Nos. 12432018, 12372346) and the Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 12221002).
Funding (tree-based ensemble rockburst prediction): Projects (41807259, 51604109) supported by the National Natural Science Foundation of China; Project (2020CX040) supported by the Innovation-Driven Project of Central South University, China; Project (2018JJ3693) supported by the Natural Science Foundation of Hunan Province, China.
Funding (machine learning lithostratigraphic classification from stream sediment geochemistry): Projects (41772348, 42072326) supported by the National Natural Science Foundation of China; Project (2017YFC0601503) supported by the National Key Research and Development Program, China.
Funding (improved LS-SVM time series online prediction): Project (SGKJ[200301-16]) supported by the State Grid Corporation of China.
Funding (FCM-ELM short-term travel flow prediction): Project (2016YFB0100906) supported by the National Key R&D Program of China; Project (2014BAG03B01) supported by the National Science and Technology Support Plan, China; Project (61673232) supported by the National Natural Science Foundation of China; Projects (Dl S11090028000, D171100006417003) supported by the Beijing Municipal Science and Technology Program, China.
Funding (DP-BPR early destination prediction): Project (2018YFF0214706) supported by the National Key Research and Development Program of China; Project (cstc2020jcyj-msxmX0690) supported by the Natural Science Foundation of Chongqing, China; Project (2020CDJ-LHZZ-039) supported by the Fundamental Research Funds for the Central Universities of Chongqing, China; Project (cstc2019jscx-fxydX0012) supported by the Key Research Program of Chongqing Technology Innovation and Application Development, China.
Funding (supervised-learning impact point prediction guidance): supported by the National Natural Science Foundation of China (Grant No. 62103432) and the Young Talent Fund of the University Association for Science and Technology in Shaanxi, China (Grant No. 20210108).
Funding (GA-BEL prediction and classification): Project (61403422) supported by the National Natural Science Foundation of China; Project (17C1084) supported by the Hunan Education Department Science Foundation of China; Project (17ZD02) supported by Hunan University of Arts and Science, China.
Funding (CNN prediction of stress fields in fiber-reinforced composites): financial support received from DST-SERB SRG/2020/000997, India, and the initiation grant received from IIT Kanpur.
Funding (machine learning for parallel QBF solving): Project (61171141) supported by the National Natural Science Foundation of China.