The graded density impactor (GDI) dynamic loading technique is crucial for acquiring the dynamic physical property parameters of materials used in weapons. The accuracy and timeliness of GDI structural design are key to achieving controllable stress-strain rate loading. In this study, we have, for the first time, combined one-dimensional fluid computational software with machine learning methods. We first elucidated the mechanisms by which GDI structures control stress and strain rates. Subsequently, we constructed a machine learning model to create a structure-property response surface. The results show that altering the loading velocity and interlayer thickness has a pronounced regulatory effect on stress and strain rates. In contrast, the impedance distribution index and target thickness have less significant effects on stress regulation, although there is a matching relationship between target thickness and interlayer thickness. Compared with traditional design methods, the machine learning approach offers a 10^(4)-10^(5) times increase in efficiency and the potential to achieve a global optimum, holding promise for guiding the design of GDI.
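The structure-property response surface described above can be illustrated, in a much-simplified form, as a surrogate model fitted to simulation samples. The sketch below is purely hypothetical (the data, variable names, and one-dimensional linear form are illustrative assumptions, not the study's actual model); it fits a least-squares surrogate relating a single design parameter to a response:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b -- a minimal stand-in for a
    structure-property surrogate trained on 1D hydrocode simulation samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope from centered cross- and auto-covariance sums
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# hypothetical (interlayer thickness, peak stress) samples
a, b = fit_line([1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.0, 8.1])
```

Once fitted, evaluating the surrogate is a constant-time arithmetic operation, which is where the large speedup over re-running a hydrocode for each candidate design would come from.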
In the last decade, space solar power satellites (SSPSs) have been conceived to support net-zero carbon emissions and have attracted considerable attention. Electric energy is transmitted to the ground via a microwave power beam, a technology known as microwave power transmission (MPT). Due to the vast transmission distance of tens of thousands of kilometers, the power transmitting antenna array must span up to 1 kilometer in diameter, while the rectifying array on the ground should extend over a few kilometers. This makes the MPT system of SSPSs significantly larger than existing aerospace engineering systems. To design and operate a rational MPT system, comprehensive optimization is required. Taking space MPT system engineering into consideration, a novel multi-objective optimization function is proposed and analyzed. The multi-objective optimization problem is modeled mathematically. Beam collection efficiency (BCE) is the primary factor, followed by thermal management capability. Some tapers, designed to resolve the conflict between BCE and the thermal problem, are reviewed. In addition to these two factors, rectenna design complexity is included as a functional factor in the optimization objective. Weight coefficients are assigned to these factors to prioritize them. Radiating planar arrays with different aperture illumination fields are studied, and their performances are compared using the multi-objective optimization function. Transmitting array size, rectifying array size, transmission distance, and transmitted power remain constant across the cases, ensuring fair comparisons. The analysis results show that the proposed optimization function is effective in optimizing and selecting the MPT system architecture. It is also noted that the multi-objective optimization function can be expanded to include other factors in the future.
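A weighted multi-objective function of the kind described above can be sketched very simply. The weights, the normalization of the three factors, and the example taper values below are all hypothetical assumptions for illustration, not the paper's actual formulation:

```python
def mpt_score(bce, thermal_margin, rectenna_complexity,
              w_bce=0.6, w_thermal=0.3, w_complexity=0.1):
    """Weighted figure of merit: higher is better. BCE and thermal margin are
    rewarded; rectenna design complexity is penalized. All inputs are assumed
    normalized to [0, 1], and the weights encode the stated priority order."""
    return w_bce * bce + w_thermal * thermal_margin - w_complexity * rectenna_complexity

# Compare two hypothetical aperture tapers under identical array sizes,
# distance, and transmitted power (the "fair comparison" condition above).
uniform = mpt_score(bce=0.82, thermal_margin=0.40, rectenna_complexity=0.20)
gaussian = mpt_score(bce=0.95, thermal_margin=0.70, rectenna_complexity=0.50)
```

Under these made-up numbers the tapered illumination wins despite its higher rectenna complexity, which is the kind of trade the weighted objective is meant to arbitrate.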
The rapid integration of Internet of Things (IoT) technologies is reshaping the global energy landscape by deploying smart meters that enable high-resolution consumption monitoring, two-way communication, and advanced metering infrastructure services. However, this digital transformation also exposes power systems to evolving threats, ranging from cyber intrusions and electricity theft to device malfunctions, and the unpredictable nature of these anomalies, coupled with the scarcity of labeled fault data, makes real-time detection exceptionally challenging. To address these difficulties, a real-time decision support framework is presented for smart meter anomaly detection that leverages rolling time windows and two self-supervised contrastive learning modules. The first module synthesizes diverse negative samples to overcome the lack of labeled anomalies, while the second captures intrinsic temporal patterns for enhanced contextual discrimination. The end-to-end framework continuously updates its model with rolling meter data to deliver timely identification of emerging abnormal behaviors in evolving grids. Extensive evaluations on eight publicly available smart meter datasets covering seven diverse abnormal patterns demonstrate the effectiveness of the proposed framework, which achieves average recall and F1 scores above 0.85.
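The rolling-window idea can be sketched without the contrastive modules. The toy detector below is a deliberately simplified stand-in (a trailing-window deviation test, not the paper's self-supervised method); the window size and threshold are hypothetical:

```python
from collections import deque

def rolling_anomaly_flags(readings, window=24, k=3.0):
    """Flag a meter reading as anomalous when it deviates by more than k
    standard deviations from the trailing window of recent readings.
    The first `window` readings are warm-up and never flagged."""
    buf = deque(maxlen=window)
    flags = []
    for x in readings:
        if len(buf) == window:
            mean = sum(buf) / window
            std = (sum((v - mean) ** 2 for v in buf) / window) ** 0.5
            flags.append(std > 0 and abs(x - mean) > k * std)
        else:
            flags.append(False)
        buf.append(x)
    return flags

# a regular alternating load pattern followed by one spike
flags = rolling_anomaly_flags([1.0, 2.0] * 12 + [10.0])
```

Because the buffer is bounded (`maxlen=window`), the statistics track only recent behavior, which is the same motivation as the rolling updates in the framework above: the notion of "normal" adapts as the grid evolves.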
This work proposes an iterative learning model predictive control (ILMPC) approach based on an adaptive fault observer (FOBILMPC) for fault-tolerant control and trajectory tracking in air-breathing hypersonic vehicles. To increase control authority, this online control law combines model predictive control (MPC) with the concept of iterative learning control (ILC). By using offline data to reduce the errors of the linearized model, the strategy can effectively increase the robustness of the control system and guarantee that disturbances are suppressed. An adaptive fault observer is designed on top of the proposed ILMPC approach to enhance overall fault tolerance by estimating and compensating for actuator disturbances and fault severity. During the derivation, a linearized model of the longitudinal dynamics is established. Numerical simulations demonstrate that the proposed ILMPC approach reduces tracking error and speeds up convergence compared with the offline controller, making it a promising candidate for hypersonic vehicle control system design.
Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources at individual satellite nodes and a dynamic network topology, which bring many challenges for routing algorithms. To satisfy the quality of service (QoS) requirements of various users, it is critical to research efficient routing strategies that fully utilize satellite resources. This paper proposes a multi-QoS-information-optimized routing algorithm based on reinforcement learning for LEO satellite networks. It guarantees that services with high-level assurance demands are prioritized under limited satellite resources, while accounting for the load balancing performance of the network for services with low-level assurance demands, to ensure full and effective utilization of satellite resources. An auxiliary path search algorithm is proposed to accelerate the convergence of the satellite routing algorithm. Simulation results show that the generated routing strategy can promptly process and fully meet the QoS demands of high-assurance services while effectively improving the load balancing performance of the links.
The lack of a systematic and scientific top-level arrangement in the field of civil aircraft flight test leads to problems of long duration and high cost. Based on flight test activities, mathematical models of flight test duration and cost are established to set up a framework for the flight test process. The top-level arrangement for flight test is then optimized by a multi-objective algorithm to reduce both duration and cost. To verify the necessity and validity of the mathematical models and the optimization algorithm, real flight test data are used in an example calculation. Results show that the multi-objective optimization of the top-level flight arrangement outperforms the initial arrangement: it can shorten the duration, reduce the cost, and improve the efficiency of flight test.
Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both data volume and infrastructure challenges in edge environments. However, the diversity of clients in edge cloud computing presents significant challenges for FL, and personalized federated learning (pFL) has received considerable attention in recent years. One line of pFL work exploits both global and local information in the local model. Current pFL algorithms suffer from limitations such as slow convergence, catastrophic forgetting, and poor performance on complex tasks, leaving significant shortcomings compared with centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the combined use of contrastive learning and conditional computing: contrastive learning measures feature representation similarity to adjust the local model, while conditional computing separates global and local information and feeds each to its corresponding head for global and local handling. Our comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
Heterogeneous federated learning (HtFL) has gained significant attention due to its ability to accommodate diverse models and data from distributed combat units. Prototype-based HtFL methods were proposed to reduce the high communication cost of transmitting model parameters; they allow heterogeneous clients to share only class representatives while maintaining privacy. However, existing prototype learning approaches fail to take the data distribution of clients into consideration, which results in suboptimal global prototype learning and insufficient client model personalization. To address these issues, we propose a fair trainable prototype federated learning (FedFTP) algorithm, which employs a fair sampling training prototype (FSTP) mechanism and a hyperbolic space constraints (HSC) mechanism to enhance the fairness and effectiveness of prototype learning on the server in heterogeneous environments. Furthermore, a local prototype stable update (LPSU) mechanism, based on contrastive learning, maintains personalization while promoting global consistency. Comprehensive experimental results demonstrate that FedFTP achieves state-of-the-art performance in HtFL scenarios.
To address the neglect of joint operations and collaborative drone swarm operations in air combat target intent recognition, this paper proposes a transfer learning-based intention prediction model for drone formation targets in air combat. The model recognizes the intentions of multiple aerial targets by extracting spatial features among the targets at each moment. Simulation results demonstrate that, compared with classical intention recognition models, the proposed model achieves higher accuracy in identifying the intentions of drone swarm targets in air combat scenarios.
3-Nitro-1,2,4-triazol-5-one (NTO) is a typical high-energy, low-sensitivity explosive, and accurate concentration monitoring is critical for crystallization process control. In this study, a high-precision quantitative analytical model for NTO concentration in ethanol solutions was developed by integrating real-time ATR-FTIR spectroscopy with chemometric and machine learning techniques. Dynamic spectral data were obtained by designing multi-concentration gradient heating-cooling cycle experiments, abnormal samples were eliminated using the isolation forest algorithm, and the effects of various preprocessing methods on model performance were systematically evaluated. The results show that partial least squares regression (PLSR) exhibits superior generalization ability compared with other models. Vibrational bands corresponding to C=O and -NO_(2) were identified as key predictors for concentration estimation. This work provides an efficient and reliable solution for real-time concentration monitoring during NTO crystallization and holds significant potential for process analytical applications in energetic material manufacturing.
The dwell scheduling problem for a multifunctional radar system is formulated as a corresponding optimization problem. To solve it, the dwell scheduling process in a scheduling interval (SI) is modeled as a Markov decision process (MDP), where the state, action, and reward are specified for the dwell scheduling problem. In particular, the action is defined as scheduling the task on the left side, on the right side, or in the middle of the radar's idle timeline, which effectively reduces the action space and accelerates training convergence. Through this process, a model-free reinforcement learning framework is established. Then, an adaptive dwell scheduling method based on Q-learning is proposed, in which the converged Q-value table obtained after training guides the scheduling process. Simulation results demonstrate that, compared with existing dwell scheduling algorithms, the proposed method achieves better scheduling performance when the urgency, importance, and desired execution time criteria are considered comprehensively. The average running time shows that the proposed algorithm has real-time performance.
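The tabular Q-learning machinery behind such a method can be sketched compactly. The three-way action set mirrors the left/middle/right placement described above, but the state, reward, and training loop below are toy assumptions, not the paper's actual MDP:

```python
import random

ACTIONS = ("left", "middle", "right")  # where to place the task on the idle timeline

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# Toy training loop over a single state: placing the task on the "left"
# (earliest idle slot) is assumed to always earn reward 1, the others 0.
q = {}
random.seed(0)
for _ in range(200):
    a = random.choice(ACTIONS)
    q_update(q, 0, a, 1.0 if a == "left" else 0.0, 0)
```

After training, the converged table is read greedily: for each state, the scheduler picks the action with the largest Q-value, which is what "the Q-value table instructs the scheduling process" amounts to.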
Background The geo-traceability of cotton is crucial for ensuring the quality and integrity of cotton brands. However, effective methods for achieving this traceability are currently lacking. This study investigates the potential of explainable machine learning for the geo-traceability of raw cotton. Results The findings indicate that principal component analysis (PCA) exhibits limited effectiveness in tracing cotton origins. In contrast, partial least squares discriminant analysis (PLS-DA) demonstrates superior classification performance, identifying seven discriminating variables: Na, Mn, Ba, Rb, Al, As, and Pb. The decision tree (DT), support vector machine (SVM), and random forest (RF) models yielded origin-discrimination accuracies of 90%, 87%, and 97%, respectively. Notably, the light gradient boosting machine (LightGBM) model achieved perfect performance metrics, with accuracy, precision, and recall all reaching 100% on the test set. The output of the LightGBM model was further evaluated using the SHapley Additive exPlanations (SHAP) technique, which highlighted differences in the elemental composition of raw cotton from various countries. Specifically, the elements Pb, Ni, Na, Al, As, Ba, and Rb significantly influenced the model's predictions. Conclusion These findings suggest that explainable machine learning techniques can provide insights into the complex relationships between geographic information and raw cotton. Consequently, these methodologies enhance the precision and reliability of geographic traceability for raw cotton.
The belief rule-based (BRB) system has become popular in complex system modeling due to its good interpretability. However, current mainstream optimization methods for BRB systems focus only on modeling accuracy and ignore interpretability. Single-objective optimization has been applied to the interpretability-accuracy trade-off by integrating accuracy and interpretability into one optimization objective, but this integration strongly and subjectively influences the optimization results. Thus, this paper proposes a multi-objective optimization framework for modeling BRB systems with an interpretability-accuracy trade-off. Firstly, complexity and accuracy are taken as two independent optimization goals, with uniformity as a constraint, to give the mathematical description. Secondly, a classical multi-objective optimization algorithm, the nondominated sorting genetic algorithm II (NSGA-II), is utilized as the optimization tool to produce a set of BRB systems with different accuracies and complexities. Finally, a pipeline leakage detection case study verifies the feasibility and effectiveness of the developed multi-objective optimization. The comparison illustrates that the proposed framework effectively avoids the subjectivity of single-objective optimization and is capable of jointly optimizing the structure and parameters of BRB systems under the interpretability-accuracy trade-off.
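The heart of such a framework is Pareto dominance: instead of collapsing complexity and accuracy into one score, NSGA-II keeps every candidate that no other candidate beats on both objectives. A minimal sketch of that selection step (the candidate (rule-count, error) pairs are hypothetical, and real NSGA-II adds crowding distance and genetic operators on top):

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and strictly
    better in at least one. Both objectives here are minimized:
    (complexity, modeling error)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated candidates."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# hypothetical candidate BRB systems as (rule count, modeling error) pairs
candidates = [(4, 0.30), (6, 0.12), (8, 0.11), (6, 0.20), (10, 0.10)]
front = pareto_front(candidates)
```

The returned front is exactly the "set of BRB systems with different accuracies and complexities" the abstract describes: the designer, not a fixed weight, makes the final interpretability-accuracy choice.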
Background Plant tissue culture has emerged as a tool for improving cotton propagation and genetics, but the recalcitrant nature of cotton makes it difficult to develop in vitro regeneration. Cotton's recalcitrance is influenced by genotype, explant type, and environmental conditions. To overcome these issues, this study uses different machine learning-based predictive models employing multiple input factors. Cotyledonary node explants of two commercial cotton cultivars (STN-468 and GSN-12) were isolated from 7-8-day-old seedlings and preconditioned with 5, 10, and 20 mg·L^(-1) kinetin (KIN) for 10 days. Thereafter, explants were postconditioned on full Murashige and Skoog (MS), 1/2 MS, 1/4 MS, and full MS + 0.05 mg·L^(-1) KIN media, and cultured in a growth room illuminated with combinations of red and blue light-emitting diodes (LEDs). Statistical analysis (analysis of variance, regression analysis) was employed to assess the impact of the different treatments on shoot regeneration, with artificial intelligence (AI) models used to confirm the findings. Results GSN-12 exhibited superior shoot regeneration potential compared with STN-468, with an average of 4.99 shoots per explant versus 3.97. Optimal results were achieved with 5 mg·L^(-1) KIN preconditioning, 1/4 MS postconditioning, and 80% red LED, with a maximum shoot count of 7.75 for GSN-12 under these conditions; STN-468 reached 6.00 shoots under 10 mg·L^(-1) KIN preconditioning, MS with 0.05 mg·L^(-1) KIN postconditioning, and 75.0% red LED. Rooting was successfully achieved with naphthalene acetic acid and activated charcoal. Additionally, three powerful AI-based models, namely extreme gradient boosting (XGBoost), random forest (RF), and the artificial neural network-based multilayer perceptron (MLP) regression model, validated the findings. Conclusion GSN-12 outperformed STN-468, with optimal results from 5 mg·L^(-1) KIN + 1/4 MS + 80% red LED. Applying machine learning-based prediction models to optimize cotton tissue culture protocols for shoot regeneration helps improve cotton regeneration efficiency.
Blast-induced ground vibration, quantified by peak particle velocity (PPV), is a crucial factor in mitigating environmental and structural risks in mining and geotechnical engineering. Accurate PPV prediction facilitates safer and more sustainable blasting operations by minimizing adverse impacts and ensuring regulatory compliance. This study presents an advanced predictive framework integrating CatBoost (CB) with nature-inspired optimization algorithms, including the Bat Algorithm (BAT), Sparrow Search Algorithm (SSA), Butterfly Optimization Algorithm (BOA), and Grasshopper Optimization Algorithm (GOA). A comprehensive dataset from the Sarcheshmeh Copper Mine in Iran was utilized to develop and evaluate these models using key performance metrics such as the index of agreement (IoA), Nash-Sutcliffe efficiency (NSE), and the coefficient of determination (R^(2)). The hybrid CB-BOA model outperformed the other approaches, achieving the highest accuracy (R^(2) = 0.989) and the lowest prediction errors. SHAP analysis identified distance (Di) as the most influential variable affecting PPV, while uncertainty analysis confirmed CB-BOA as the most reliable model, featuring the narrowest prediction interval. These findings highlight the effectiveness of hybrid machine learning models in refining PPV predictions, contributing to improved blast design strategies, enhanced structural safety, and reduced environmental impacts in mining and geotechnical engineering.
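Two of the evaluation metrics named above have short closed forms and are easy to compute from scratch. The sketch below implements NSE and Willmott's IoA directly from their standard definitions (the observed/predicted PPV values are made-up numbers for illustration):

```python
def nse(obs, pred):
    """Nash-Sutcliffe efficiency: 1 - SSE / (total variation of observations
    about their mean). 1.0 is a perfect fit; 0.0 means no better than the mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def index_of_agreement(obs, pred):
    """Willmott's index of agreement (IoA), bounded in [0, 1]."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - p) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - mean_obs) + abs(o - mean_obs)) ** 2 for o, p in zip(obs, pred))
    return 1.0 - num / den

# hypothetical observed vs. predicted PPV values (mm/s)
obs = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.2, 3.8]
```

Reporting several such metrics together, as the study does, guards against a model that scores well on one criterion (e.g., correlation) while being biased on another.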
Driven by rapid technological advancements and economic growth, mineral extraction and metal refining have increased dramatically, generating huge volumes of tailings and mine waste (TMWs). Investigating the morphological fractions of heavy metals and metalloids (HMMs) in TMWs is key to evaluating their leaching potential into the environment; however, traditional experiments are time-consuming and labor-intensive. In this study, 10 machine learning (ML) algorithms were used and compared for rapidly predicting the morphological fractions of HMMs in TMWs. A dataset comprising 2376 data points was used, with mineral composition, elemental properties, and total concentration as inputs and the concentration of each morphological fraction as output. After grid search optimization, the extra trees model performed best, achieving coefficients of determination (R^(2)) of 0.946 and 0.942 on the validation and test sets, respectively. Electronegativity was found to have the greatest impact on the morphological fraction. Model performance was further enhanced by applying an ensemble method to the top three ML models: gradient boosting decision tree, extra trees, and categorical boosting. Overall, the proposed framework can accurately predict the concentrations of the different morphological fractions of HMMs in TMWs, minimizing detection time and aiding the safe management and recovery of TMWs.
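The final ensembling step, combining the top three tuned models, can be as simple as averaging their predictions. The sketch below uses trivial callables as hypothetical stand-ins for the fitted regressors (the study does not specify its exact combination rule, so plain averaging is an assumption):

```python
def ensemble_predict(models, x):
    """Simple soft ensemble: average the predictions of the base regressors."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

# Hypothetical stand-ins for the tuned GBDT, extra trees, and categorical
# boosting regressors; each is a callable returning a predicted concentration.
base_models = [
    lambda x: 2.0 * x,        # "GBDT"
    lambda x: 2.0 * x + 0.3,  # "extra trees"
    lambda x: 2.0 * x - 0.3,  # "categorical boosting"
]
prediction = ensemble_predict(base_models, 1.5)
```

Averaging cancels uncorrelated errors of the base models, which is the usual reason an ensemble of the top performers edges out any single one of them.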
Background Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its usage in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is greatly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various available techniques, crop models are the most effective and widely used tools for predicting yields. Results This study compares statistical and machine learning models to assess their ability to predict cotton yield across the major producing districts of Karnataka, India, utilizing a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. Artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2). Model evaluation metrics such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF) were also within acceptance limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts for which yield was predicted. These differences highlight the differential interactions of weather factors in cotton yield formation, and the individual response to each weather factor under the different soils and management conditions of the major cotton-growing districts of Karnataka. Conclusions Compared with statistical models, machine learning models such as ANNs demonstrated higher efficiency in forecasting cotton yield due to their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting under rainfed conditions and for studying the relative impacts of weather factors on yield. The study thus provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
Funding: supported by the Guangdong Major Project of Basic and Applied Basic Research (Grant No. 2021B0301030001), the National Key Research and Development Program of China (Grant No. 2021YFB3802300), and the Foundation of National Key Laboratory of Shock Wave and Detonation Physics (Grant No. JCKYS2022212004).
Funding: supported by the National Natural Science Foundation of China (12072090).
Funding: National Key Research and Development Program (2021YFB2900604).
Abstract: Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources on individual satellite nodes and a dynamic network topology, which pose many challenges for routing algorithms. To satisfy the quality of service (QoS) requirements of various users, it is critical to design efficient routing strategies that fully utilize satellite resources. This paper proposes a multi-QoS-information optimized routing algorithm based on reinforcement learning for LEO satellite networks. Under limited satellite resources, the algorithm guarantees that services with high assurance demands are prioritized, while considering the load-balancing performance of the network for services with low assurance demands to ensure full and effective utilization of satellite resources. An auxiliary path search algorithm is proposed to accelerate the convergence of the routing algorithm. Simulation results show that the generated routing strategy can promptly process and fully meet the QoS demands of high-assurance services while effectively improving the load-balancing performance of the links.
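The auxiliary path search mentioned above is not specified in detail; as a hedged illustration, a plain breadth-first search over the satellite connectivity graph can supply an initial minimum-hop route to warm-start a reinforcement learning router. The graph encoding and function name below are illustrative assumptions, not the paper's algorithm.

```python
from collections import deque

def bfs_path(adjacency, src, dst):
    """Breadth-first search for a minimum-hop path; returns None if unreachable."""
    parents = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []  # walk parent pointers back to the source
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nxt in adjacency.get(node, ()):
            if nxt not in parents:
                parents[nxt] = node
                queue.append(nxt)
    return None
```

Seeding the Q-table along such a path lets the learner start from a feasible route instead of a blank slate.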
Funding: Supported by the National Natural Science Foundation of China (62073267, 61903305) and the Fundamental Research Funds for the Central Universities (HXGJXM202214).
Abstract: The lack of a systematic and scientific top-level arrangement in civil aircraft flight testing leads to long durations and high costs. Based on flight test activities, mathematical models of flight test duration and cost are established to frame the flight test process. The top-level flight test arrangement is then optimized with a multi-objective algorithm to reduce both the duration and the cost of flight testing. To verify the necessity and validity of the mathematical models and the optimization algorithm, real flight test data are used in an example calculation. Results show that the multi-objective optimization of the top-level flight arrangement outperforms the initial arrangement: it shortens the duration, reduces the cost, and improves the efficiency of flight testing.
Funding: Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (Grant No. 2022D01B187).
Abstract: Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both the data volume and infrastructure challenges of edge environments. However, the diversity of clients in edge cloud computing poses significant challenges for FL. Personalized federated learning (pFL) has received considerable attention in recent years; one line of pFL work exploits both global and local information in the local model. Current pFL algorithms suffer from limitations such as slow convergence, catastrophic forgetting, and poor performance on complex tasks, leaving significant gaps relative to centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the combined use of contrastive learning and conditional computing: contrastive learning measures feature representation similarity to adjust the local model, while conditional computing separates global and local information and feeds each to its corresponding head for global and local handling. Comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
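As a rough, stdlib-only illustration of using representation similarity to balance global and local information: the abstract gives no formulas, so the cosine-based blending rule below is an assumption for exposition, not FedCLCC itself.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def blend_models(local_w, global_w):
    """Blend local and global weights: lean on the global model when the
    representations agree (high similarity), on the local one otherwise."""
    alpha = (cosine_similarity(local_w, global_w) + 1) / 2  # map [-1, 1] -> [0, 1]
    return [alpha * g + (1 - alpha) * l for l, g in zip(local_w, global_w)]
```

In a real pFL system the similarity would be computed between feature representations rather than raw weight vectors, and the blend would feed separate global/local heads.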
Funding: Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (No. 2022D01B187).
Abstract: Heterogeneous federated learning (HtFL) has gained significant attention due to its ability to accommodate diverse models and data from distributed combat units. Prototype-based HtFL methods were proposed to reduce the high communication cost of transmitting model parameters: they share only class representatives between heterogeneous clients while preserving privacy. However, existing prototype learning approaches fail to take the data distributions of clients into consideration, which results in suboptimal global prototype learning and insufficient client model personalization. To address these issues, we propose a fair trainable prototype federated learning (FedFTP) algorithm, which employs a fair sampling training prototype (FSTP) mechanism and a hyperbolic space constraints (HSC) mechanism to enhance the fairness and effectiveness of prototype learning on the server in heterogeneous environments. Furthermore, a local prototype stable update (LPSU) mechanism based on contrastive learning is proposed to maintain personalization while promoting global consistency. Comprehensive experimental results demonstrate that FedFTP achieves state-of-the-art performance in HtFL scenarios.
Abstract: To address the neglect of joint operations and collaborative drone swarm operations in air combat target intent recognition, this paper proposes a transfer learning-based intention prediction model for drone formation targets in air combat. The model recognizes the intentions of multiple aerial targets by extracting spatial features among the targets at each moment. Simulation results demonstrate that, compared with classical intention recognition models, the proposed model achieves higher accuracy in identifying the intentions of drone swarm targets in air combat scenarios.
Funding: Supported by the Aeronautical Science Foundation of China (Grant No. 20230018072011).
Abstract: 3-Nitro-1,2,4-triazol-5-one (NTO) is a typical high-energy, low-sensitivity explosive, and accurate concentration monitoring is critical for crystallization process control. In this study, a high-precision quantitative analytical model for NTO concentration in ethanol solutions was developed by integrating real-time ATR-FTIR spectroscopy with chemometric and machine learning techniques. Dynamic spectral data were obtained from multi-concentration-gradient heating-cooling cycle experiments, abnormal samples were eliminated using the isolation forest algorithm, and the effects of various preprocessing methods on model performance were systematically evaluated. The results show that partial least squares regression (PLSR) exhibits superior generalization ability compared with other models. Vibrational bands corresponding to C=O and –NO_(2) were identified as key predictors for concentration estimation. This work provides an efficient and reliable solution for real-time concentration monitoring during NTO crystallization and holds significant potential for process analytical applications in energetic material manufacturing.
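PLSR itself needs a linear algebra stack, but the underlying calibration idea, mapping a spectral response to a concentration through a fitted linear relation, can be shown with a closed-form univariate least-squares fit. This is a deliberately simplified stand-in for the multivariate PLSR model in the abstract; the variable names are illustrative.

```python
def fit_calibration(absorbances, concentrations):
    """Ordinary least-squares fit of concentration = a * absorbance + b."""
    n = len(absorbances)
    mx = sum(absorbances) / n
    my = sum(concentrations) / n
    # closed-form slope: covariance over variance
    sxy = sum((x - mx) * (y - my) for x, y in zip(absorbances, concentrations))
    sxx = sum((x - mx) ** 2 for x in absorbances)
    a = sxy / sxx
    b = my - a * mx
    return a, b
```

PLSR generalizes this to many correlated wavenumbers at once by projecting onto latent components before regressing.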
Funding: Supported by the National Natural Science Foundation of China (61771095, 62031007).
Abstract: The dwell scheduling problem for a multifunctional radar system is formulated as a corresponding optimization problem. To solve it, the dwell scheduling process in a scheduling interval (SI) is modeled as a Markov decision process (MDP), where the state, action, and reward are specified for this dwell scheduling problem. In particular, the action is defined as scheduling the task on the left side, on the right side, or in the middle of the radar idle timeline, which effectively reduces the action space and accelerates the convergence of training. Through the above process, a model-free reinforcement learning framework is established. Then, an adaptive dwell scheduling method based on Q-learning is proposed, in which the converged Q-value table obtained after training guides the scheduling process. Simulation results demonstrate that, compared with existing dwell scheduling algorithms, the proposed one achieves better scheduling performance when the urgency, importance, and desired-execution-time criteria are considered comprehensively. The average running time shows that the proposed algorithm has real-time performance.
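The tabular Q-learning update at the core of the method above follows the standard rule Q(s,a) ← Q(s,a) + α[r + γ·max_a' Q(s',a') − Q(s,a)], with the three placement actions from the abstract. The state encoding and the α, γ values below are illustrative assumptions, not the paper's settings.

```python
# where to place a task on the radar idle timeline, per the abstract
ACTIONS = ("left", "middle", "right")

def q_update(q_table, state, action, reward, next_state,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning update on a dict keyed by (state, action)."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q_table
```

After training converges, scheduling reduces to a table lookup: pick the action with the largest Q value for the current state, which is what gives the method its real-time performance.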
Funding: Supported by the Agricultural Science and Technology Innovation Program of the Chinese Academy of Agricultural Sciences.
Abstract: Background The geo-traceability of cotton is crucial for ensuring the quality and integrity of cotton brands. However, effective methods for achieving this traceability are currently lacking. This study investigates the potential of explainable machine learning for the geo-traceability of raw cotton. Results The findings indicate that principal component analysis (PCA) exhibits limited effectiveness in tracing cotton origins. In contrast, partial least squares discriminant analysis (PLS-DA) demonstrates superior classification performance, identifying seven discriminating variables: Na, Mn, Ba, Rb, Al, As, and Pb. Decision tree (DT), support vector machine (SVM), and random forest (RF) models for origin discrimination yielded accuracies of 90%, 87%, and 97%, respectively. Notably, the light gradient boosting machine (LightGBM) model achieved perfect performance metrics, with accuracy, precision, and recall all reaching 100% on the test set. The output of the LightGBM model was further evaluated using the SHapley Additive exPlanations (SHAP) technique, which highlighted differences in the elemental composition of raw cotton from various countries. Specifically, the elements Pb, Ni, Na, Al, As, Ba, and Rb significantly influenced the model's predictions. Conclusion These findings suggest that explainable machine learning techniques can provide insights into the complex relationships between geographic information and raw cotton. Consequently, these methodologies enhance the precision and reliability of geographic traceability for raw cotton.
Funding: Supported by the National Natural Science Foundation of China (71901212) and the Science and Technology Innovation Program of Hunan Province (2020RC4046).
Abstract: The belief rule-based (BRB) system has become popular in complex system modeling due to its good interpretability. However, current mainstream optimization methods for BRB systems focus only on modeling accuracy and ignore interpretability. A single-objective optimization strategy has been applied to the interpretability-accuracy trade-off by integrating accuracy and interpretability into a single optimization objective, but this integration strongly and subjectively influences the optimization results. Thus, this paper proposes a multi-objective optimization framework for modeling BRB systems with an interpretability-accuracy trade-off. Firstly, complexity and accuracy are taken as two independent optimization goals, with uniformity as a constraint, to give the mathematical description. Secondly, a classical multi-objective optimization algorithm, the nondominated sorting genetic algorithm II (NSGA-II), is utilized as the optimization tool to produce a set of BRB systems with different accuracies and complexities. Finally, a pipeline leakage detection case is studied to verify the feasibility and effectiveness of the developed multi-objective optimization. The comparison illustrates that the proposed multi-objective optimization framework can effectively avoid the subjectivity of single-objective optimization and is capable of jointly optimizing the structure and parameters of BRB systems under the interpretability-accuracy trade-off.
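The non-dominated sorting at the heart of NSGA-II starts from the Pareto-dominance test sketched below, with both objectives minimized (here, think complexity and error). This is the textbook definition of the first front, not the paper's full NSGA-II implementation, which adds ranking of subsequent fronts and crowding distance.

```python
def pareto_front(points):
    """Return the non-dominated points when every objective is minimized.

    A point p is dominated if some other point q is no worse in every
    objective and differs from p (hence strictly better in at least one).
    """
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

The set this returns is exactly the trade-off curve the multi-objective framework hands to the decision maker: each surviving BRB candidate is better than every other on at least one of accuracy or complexity.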
Abstract: Background Plant tissue culture has emerged as a tool for improving cotton propagation and genetics, but the recalcitrant nature of cotton makes it difficult to develop in vitro regeneration. Cotton's recalcitrance is influenced by genotype, explant type, and environmental conditions. To overcome these issues, this study uses different machine learning-based predictive models employing multiple input factors. Cotyledonary node explants of two commercial cotton cultivars (STN-468 and GSN-12) were isolated from 7–8-day-old seedlings and preconditioned with 5, 10, and 20 mg·L^(-1) kinetin (KIN) for 10 days. Thereafter, explants were postconditioned on full Murashige and Skoog (MS), 1/2 MS, 1/4 MS, and full MS + 0.05 mg·L^(-1) KIN, and cultured in a growth room illuminated with a combination of red and blue light-emitting diodes (LEDs). Statistical analysis (analysis of variance, regression analysis) was employed to assess the impact of different treatments on shoot regeneration, with artificial intelligence (AI) models used to confirm the findings. Results GSN-12 exhibited superior shoot regeneration potential compared with STN-468, averaging 4.99 shoots per explant versus 3.97. Optimal results were achieved with 5 mg·L^(-1) KIN preconditioning, 1/4 MS postconditioning, and 80% red LED, with a maximum shoot count of 7.75 for GSN-12 under these conditions, while STN-468 reached 6.00 shoots under 10 mg·L^(-1) KIN preconditioning, MS with 0.05 mg·L^(-1) KIN postconditioning, and 75.0% red LED. Rooting was successfully achieved with naphthalene acetic acid and activated charcoal. Additionally, three powerful AI-based models, namely extreme gradient boosting (XGBoost), random forest (RF), and the artificial neural network-based multilayer perceptron (MLP) regression model, validated the findings. Conclusion GSN-12 outperformed STN-468, with optimal results from 5 mg·L^(-1) KIN + 1/4 MS + 80% red LED. Applying machine learning-based prediction models to optimize cotton tissue culture protocols for shoot regeneration helps improve cotton regeneration efficiency.
Funding: The authors thank the Deanship of Scientific Research at Northern Border University, Arar, KSA, for funding this research work through project number "NBUFFMRA-2025-2461-09".
Abstract: Blast-induced ground vibration, quantified by peak particle velocity (PPV), is a crucial factor in mitigating environmental and structural risks in mining and geotechnical engineering. Accurate PPV prediction facilitates safer and more sustainable blasting operations by minimizing adverse impacts and ensuring regulatory compliance. This study presents an advanced predictive framework integrating CatBoost (CB) with nature-inspired optimization algorithms, including the Bat Algorithm (BAT), Sparrow Search Algorithm (SSA), Butterfly Optimization Algorithm (BOA), and Grasshopper Optimization Algorithm (GOA). A comprehensive dataset from the Sarcheshmeh Copper Mine in Iran was used to develop and evaluate these models with key performance metrics such as the Index of Agreement (IoA), Nash-Sutcliffe Efficiency (NSE), and the coefficient of determination (R^(2)). The hybrid CB-BOA model outperformed the other approaches, achieving the highest accuracy (R^(2)=0.989) and the lowest prediction errors. SHAP analysis identified distance (Di) as the most influential variable affecting PPV, while uncertainty analysis confirmed CB-BOA as the most reliable model, featuring the narrowest prediction interval. These findings highlight the effectiveness of hybrid machine learning models in refining PPV predictions, contributing to improved blast design strategies, enhanced structural safety, and reduced environmental impacts in mining and geotechnical engineering.
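The nature-inspired optimizers named above (BAT, SSA, BOA, GOA) all refine the same basic loop used to tune the CatBoost hyperparameters: sample candidates within bounds, evaluate an objective, keep the best. A bare-bones random search makes that baseline concrete; it is a stand-in for exposition, not any of the cited algorithms, which add guided, population-based moves.

```python
import random

def random_search(objective, bounds, iterations=200, seed=0):
    """Sample uniformly within per-dimension bounds and keep the best candidate."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    best_x, best_f = None, float("inf")
    for _ in range(iterations):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f
```

In the hybrid framework, `objective` would be a cross-validated CatBoost error and `bounds` the hyperparameter ranges; the metaheuristic replaces the blind sampling with attraction toward good candidates.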
Funding: Project (2024JJ2074) supported by the Natural Science Foundation of Hunan Province, China; Project (22376221) supported by the National Natural Science Foundation of China; Project (2023QNRC001) supported by the Young Elite Scientists Sponsorship Program by CAST, China.
Abstract: Driven by rapid technological advancements and economic growth, mineral extraction and metal refining have increased dramatically, generating huge volumes of tailings and mine wastes (TMWs). Investigating the morphological fractions of heavy metals and metalloids (HMMs) in TMWs is key to evaluating their leaching potential into the environment; however, traditional experiments are time-consuming and labor-intensive. In this study, 10 machine learning (ML) algorithms were used and compared for rapidly predicting the morphological fractions of HMMs in TMWs. A dataset comprising 2,376 data points was used, with mineral composition, elemental properties, and total concentration as inputs and the concentration of each morphological fraction as output. After grid-search optimization, the extra trees model performed best, achieving coefficients of determination (R^(2)) of 0.946 and 0.942 on the validation and test sets, respectively. Electronegativity was found to have the greatest impact on the morphological fraction. Model performance was further enhanced by applying an ensemble method to the three best ML models: gradient boosting decision tree, extra trees, and categorical boosting. Overall, the proposed framework can accurately predict the concentrations of different morphological fractions of HMMs in TMWs, minimizing detection time and aiding the safe management and recovery of TMWs.
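The ensemble step described above, combining the three best models, can be as simple as averaging their predictions (soft voting); weighted blends and stacking are common refinements. The sketch below shows plain averaging with callables standing in for fitted models, an illustrative simplification rather than the paper's exact method.

```python
def ensemble_predict(models, x):
    """Average the predictions of several fitted models (simple soft voting)."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)
```

Averaging uncorrelated errors is why a small ensemble of strong tree models often edges out any single member.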
Funding: Funded through the India Meteorological Department, New Delhi, India, under the Forecasting Agricultural output using Space, Agrometeorology and Land based observations (FASAL) project, fund number No. ASC/FASAL/KT-11/01/HQ-2010.
Abstract: Background Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its usage in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is strongly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various available techniques, crop models are the most effective and widely used tools for predicting yields. Results This study compares statistical and machine learning models to assess their ability to predict cotton yield across the major producing districts of Karnataka, India, utilizing a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. Artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2) for cotton. Model evaluation metrics such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF) were also within acceptance limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts with predicted yields. These differences highlight the differential interactions of weather factors in each district for cotton yield formation, reflecting the individual response of each weather factor under different soil and management conditions across the major cotton-growing districts of Karnataka. Conclusions Compared with statistical models, machine learning models such as ANNs demonstrated higher efficiency in forecasting cotton yield due to their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting in rainfed conditions and for studying the relative impacts of weather factors on yield. Thus, the study provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
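The ANN behind the forecasts above is a standard multilayer perceptron; its forward pass for one hidden layer is sketched below. The tanh activation and linear output are common choices assumed here for illustration, not details taken from the study.

```python
import math

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer network: tanh hidden units, linear output."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out
```

The hidden layer's products of weighted inputs are what let an ANN capture the weather-factor interactions (e.g., humidity with temperature) that linear statistical models miss.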