Funding: Supported by the National Natural Science Foundation of China (12072090).
Abstract: This work proposes an iterative learning model predictive control (ILMPC) approach based on an adaptive fault observer (FOBILMPC) for fault-tolerant control and trajectory tracking in air-breathing hypersonic vehicles. To augment the control input, this online control law employs model predictive control (MPC) built on the concept of iterative learning control (ILC). By using offline data to reduce the errors of the linearized model, the strategy can effectively increase the robustness of the control system and ensure that disturbances are suppressed. Based on the proposed ILMPC approach, an adaptive fault observer is designed to enhance overall fault tolerance by estimating and compensating for actuator disturbances and the degree of fault. During the derivation, a linearized model of the longitudinal dynamics is established. Numerical simulations demonstrate that, compared with the offline controller, the proposed ILMPC approach reduces tracking error and accelerates convergence, making it a promising candidate for the design of hypersonic vehicle control systems.
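As a minimal illustration of the iterative learning idea behind ILMPC, the sketch below implements a plain P-type ILC update, u_{k+1}(t) = u_k(t) + γ·e_k(t), on a toy first-order plant; the plant, learning gain, and reference are illustrative assumptions, not the paper's hypersonic vehicle model or its MPC layer.

```python
# Hedged sketch: P-type iterative learning control on a toy discrete-time
# first-order plant x[t+1] = a*x[t] + b*u[t]. All numbers are illustrative.

def run_trial(u, a=0.7, b=1.0, x0=0.0):
    """Simulate the plant over one trial; return the output trajectory."""
    x, ys = x0, []
    for ut in u:
        x = a * x + b * ut
        ys.append(x)
    return ys

def ilc_iterations(reference, n_iters=30, gamma=0.5):
    """Repeat trials, updating the input with the previous trial's error."""
    u = [0.0] * len(reference)
    errors = []
    for _ in range(n_iters):
        y = run_trial(u)
        e = [r - yi for r, yi in zip(reference, y)]
        errors.append(max(abs(ei) for ei in e))
        u = [ui + gamma * ei for ui, ei in zip(u, e)]  # learning update
    return errors

ref = [1.0] * 20          # constant reference to track
errs = ilc_iterations(ref)
print(errs[0], errs[-1])  # tracking error shrinks across iterations
```

The per-trial error contracts here because the learning gain satisfies the usual ILC convergence condition for this plant; the paper's contribution is combining such iteration-domain learning with an online MPC law and a fault observer.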
Funding: Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (No. 2022D01B187).
Abstract: Heterogeneous federated learning (HtFL) has gained significant attention due to its ability to accommodate diverse models and data from distributed combat units. Prototype-based HtFL methods have been proposed to reduce the high communication cost of transmitting model parameters; they allow heterogeneous clients to share only class representatives while maintaining privacy. However, existing prototype learning approaches fail to take the data distribution of clients into consideration, which results in suboptimal global prototype learning and insufficient client model personalization. To address these issues, we propose a fair trainable prototype federated learning (FedFTP) algorithm, which employs a fair sampling training prototype (FSTP) mechanism and a hyperbolic space constraints (HSC) mechanism to enhance the fairness and effectiveness of prototype learning on the server in heterogeneous environments. Furthermore, a local prototype stable update (LPSU) mechanism, based on contrastive learning, is proposed to maintain personalization while promoting global consistency. Comprehensive experimental results demonstrate that FedFTP achieves state-of-the-art performance in HtFL scenarios.
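The prototype exchange that FedFTP builds on can be sketched as follows: clients share only per-class feature means, and the server aggregates them, here weighted by class sample counts as a simple stand-in for distribution-aware weighting (the exact FSTP, HSC, and LPSU mechanisms are not reproduced).

```python
# Hedged sketch of prototype-based federated learning: only per-class feature
# means ("prototypes") leave each client, never model parameters or raw data.

def local_prototypes(features_by_class):
    """Per-class mean of local feature vectors (the only thing shared)."""
    protos = {}
    for label, feats in features_by_class.items():
        dim = len(feats[0])
        protos[label] = [sum(f[d] for f in feats) / len(feats) for d in range(dim)]
    return protos

def aggregate(client_protos, client_counts):
    """Server-side aggregation, weighting each client by its sample count."""
    global_protos = {}
    labels = {l for protos in client_protos for l in protos}
    for label in labels:
        contribs = [(p[label], c[label])
                    for p, c in zip(client_protos, client_counts) if label in p]
        total = sum(c for _, c in contribs)
        dim = len(contribs[0][0])
        global_protos[label] = [sum(p[d] * c for p, c in contribs) / total
                                for d in range(dim)]
    return global_protos

# Two clients holding 2-D features for class 0:
c1 = local_prototypes({0: [[1.0, 0.0], [3.0, 0.0]]})  # mean (2.0, 0.0), 2 samples
c2 = local_prototypes({0: [[4.0, 2.0]]})              # mean (4.0, 2.0), 1 sample
g = aggregate([c1, c2], [{0: 2}, {0: 1}])
print(g[0])  # count-weighted global prototype for class 0
```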
Funding: Funded through the India Meteorological Department, New Delhi, India, under the Forecasting Agricultural output using Space, Agrometeorology and Land based observations (FASAL) project, fund number No. ASC/FASAL/KT-11/01/HQ-2010.
Abstract: Background Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its use in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is strongly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various techniques available, crop models are the most effective and widely used tools for predicting yields. Results This study compares statistical and machine learning models to assess their ability to predict cotton yield across the major producing districts of Karnataka, India, using a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. Artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2). The model evaluation metrics, such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF), were also within acceptable limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts where yield was predicted. These differences highlight the differential interactions of weather factors in cotton yield formation in each district, underlining the individual response of each weather factor under different soils and management conditions across the major cotton-growing districts of Karnataka. Conclusions Compared with statistical models, machine learning models such as ANNs demonstrated higher efficiency in forecasting cotton yield, owing to their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting under rainfed conditions and for studying the relative impacts of weather factors on yield. Thus, the study provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
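The evaluation metrics named in the Results, RMSE, nRMSE, and modelling efficiency (EF), can be computed as below; the formulas follow common agronomic usage, and the sample yields are illustrative, not the study's data.

```python
# Hedged sketch of the three yield-model evaluation metrics: RMSE, normalized
# RMSE (as a percent of the observed mean), and Nash-Sutcliffe modelling
# efficiency (EF = 1 is a perfect fit). Sample values are illustrative.
import math

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def nrmse(obs, pred):
    mean_obs = sum(obs) / len(obs)
    return 100.0 * rmse(obs, pred) / mean_obs  # percent of observed mean

def modelling_efficiency(obs, pred):
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs = [500.0, 620.0, 580.0, 710.0]   # e.g. cotton yield, kg/ha (illustrative)
pred = [520.0, 600.0, 560.0, 700.0]
print(rmse(obs, pred), nrmse(obs, pred), modelling_efficiency(obs, pred))
```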
Funding: Project (2024JJ2074) supported by the Natural Science Foundation of Hunan Province, China; Project (22376221) supported by the National Natural Science Foundation of China; Project (2023QNRC001) supported by the Young Elite Scientists Sponsorship Program by CAST, China.
Abstract: Driven by rapid technological advancements and economic growth, mineral extraction and metal refining have increased dramatically, generating huge volumes of tailings and mine waste (TMWs). Investigating the morphological fractions of heavy metals and metalloids (HMMs) in TMWs is key to evaluating their leaching potential into the environment; however, traditional experiments are time-consuming and labor-intensive. In this study, 10 machine learning (ML) algorithms were used and compared for rapidly predicting the morphological fractions of HMMs in TMWs. A dataset comprising 2376 data points was used, with mineral composition, elemental properties, and total concentration as inputs and the concentration of each morphological fraction as output. After grid-search optimization, the extra trees model performed best, achieving coefficients of determination (R²) of 0.946 and 0.942 on the validation and test sets, respectively. Electronegativity was found to have the greatest impact on the morphological fraction. The models' performance was further enhanced by applying an ensemble method to the top three ML models: gradient boosting decision tree, extra trees, and categorical boosting. Overall, the proposed framework can accurately predict the concentrations of different morphological fractions of HMMs in TMWs. This approach can minimize detection time and aid the safe management and recovery of TMWs.
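Two steps mentioned above, the coefficient of determination (R²) and ensembling the top models, can be sketched minimally as follows; the three "models" here are stand-in prediction lists, not the paper's gradient boosting, extra trees, or CatBoost members.

```python
# Hedged sketch: R² for regression evaluation, and a simple averaging ensemble
# of several models' predictions. All prediction values are illustrative.

def r2_score(obs, pred):
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def average_ensemble(*model_preds):
    """Element-wise mean of the member models' predictions."""
    return [sum(vals) / len(vals) for vals in zip(*model_preds)]

obs = [10.0, 20.0, 30.0, 40.0]
m1 = [11.0, 19.0, 29.0, 41.0]   # three imperfect member models
m2 = [9.0, 21.0, 31.0, 39.0]
m3 = [10.5, 20.5, 29.5, 40.5]
ens = average_ensemble(m1, m2, m3)
print(r2_score(obs, m1), r2_score(obs, ens))  # ensemble beats the single model
```

When member errors are partly uncorrelated, as here, averaging cancels some of them, which is the intuition behind ensembling the top three models in the paper.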
Funding: Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (Grant No. 2022D01B187).
Abstract: Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both data volume and infrastructure challenges in edge environments. However, the diversity of clients in edge cloud computing presents significant challenges for FL. Personalized federated learning (pFL) has received considerable attention in recent years; one line of pFL exploits the global and local information in the local model. Current pFL algorithms suffer from limitations such as slow convergence, catastrophic forgetting, and poor performance on complex tasks, leaving significant shortcomings compared to centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the use of contrastive learning and conditional computing. Contrastive learning measures feature representation similarity to adjust the local model. Conditional computing separates the global and local information and feeds each to its corresponding head for global and local handling. Our comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
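The similarity measure at the heart of the contrastive step can be sketched as a cosine similarity between a local feature representation and a reference (e.g. global) one; the vectors and the idea of using the score to weight local adjustment are illustrative assumptions, not FedCLCC's exact formulation.

```python
# Hedged sketch: cosine similarity between feature representations, the basic
# quantity contrastive methods compare. Vectors below are illustrative.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

local_feat = [1.0, 2.0, 3.0]
global_feat = [1.0, 2.0, 3.1]     # nearly aligned representation
other_feat = [-3.0, 0.5, -1.0]    # dissimilar representation
print(cosine_similarity(local_feat, global_feat))  # close to 1
print(cosine_similarity(local_feat, other_feat))   # much lower
```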
Funding: The National Natural Science Foundation of China (32371993); the Natural Science Research Key Project of Anhui Provincial University (2022AH040125 & 2023AH040135); the Key Research and Development Plan of Anhui Province (202204c06020022 & 2023n06020057).
Abstract: This study aimed to address the challenge of accurately and reliably detecting tomatoes in dense planting environments, a critical prerequisite for the automation of robotic harvesting. However, the heavy reliance on extensive manually annotated datasets for training deep learning models still poses significant limitations to their application in real-world agricultural production environments. To overcome these limitations, we employed a domain-adaptive learning approach combined with the YOLOv5 model to develop a novel tomato detection model called TDA-YOLO (tomato detection domain adaptation). We designated normal-illumination scenes in dense planting environments as the source domain and various other illumination scenes as the target domain. To bridge the source and target domains, a neural preset for color style transfer was introduced to generate a pseudo-dataset, which served to mitigate the domain discrepancy. Furthermore, this study combined a semi-supervised learning method to enable the model to extract domain-invariant features more fully, and used knowledge distillation to improve the model's ability to adapt to the target domain. Additionally, to increase inference speed and lower computational demand, the lightweight FasterNet network was integrated into YOLOv5's C3 module, creating a modified C3_Faster module. The experimental results demonstrated that the proposed TDA-YOLO model significantly outperformed the original YOLOv5s model, achieving a mAP (mean average precision) of 96.80% for tomato detection across diverse scenarios in dense planting environments, an increase of 7.19 percentage points; compared with the latest YOLOv8 and YOLOv9, it is also 2.17 and 1.19 percentage points higher, respectively. The model's average detection time per image was an impressive 15 ms, with a FLOPs (floating point operations) count of 13.8 G. After acceleration processing, the detection accuracy of the TDA-YOLO model on the Jetson Xavier NX development board is 90.95%, the mAP value is 91.35%, and the detection time per image is 21 ms, which still meets the requirements of real-time tomato detection in dense planting environments. These results show that the proposed TDA-YOLO model can accurately and quickly detect tomatoes in dense planting environments while avoiding the need for large amounts of annotated data, providing technical support for the development of automatic harvesting systems for tomatoes and other fruits.
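The mAP figures above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes; a minimal sketch with illustrative (x1, y1, x2, y2) coordinates follows.

```python
# Hedged sketch of the IoU computation underlying detection mAP: a detection
# counts as correct when its IoU with a ground-truth box exceeds a threshold.
# Box coordinates are illustrative, not from the paper's dataset.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)   # intersection corners
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

gt = (10.0, 10.0, 50.0, 50.0)     # ground-truth tomato box
pred = (20.0, 20.0, 60.0, 60.0)   # overlapping detection
print(iou(gt, pred))
print(iou(gt, gt))  # identical boxes give 1.0
```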
Funding: Projects supported by the China Scholarship Council.
Abstract: This paper is devoted to the probabilistic stability analysis of a tunnel face excavated in a two-layer soil. The interface of the soil layers is assumed to be positioned above the tunnel roof. In the framework of limit analysis, a rotational failure mechanism is adopted to describe the face failure, considering different shear strength parameters in the two layers. A surrogate Kriging model is introduced to replace the actual performance function when performing Monte Carlo simulation. An active learning function is used to train the Kriging model, ensuring efficient prediction of the tunnel face failure probability without loss of accuracy. A deterministic stability analysis is given to validate the proposed tunnel face failure model. Subsequently, the number of initial sampling points, the correlation coefficient, the distribution type, and the coefficient of variation of the random variables are discussed to show their influence on the failure probability. The proposed approach is an advisable alternative for tunnel face stability assessment and can provide guidance for tunnel design.
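A common choice of active learning function for a Kriging surrogate is the AK-MCS-style U-function, U(x) = |μ(x)|/σ(x), which picks the candidate point whose sign of the performance function the surrogate is least certain about; the paper does not specify its learning function, so the sketch below, with a mock surrogate in place of a trained Kriging model, is an illustrative assumption.

```python
# Hedged sketch of U-function-based point selection for training a surrogate.
# predict(x) returns the surrogate's (mean, std) at x; small U means the
# predicted safety margin could change sign, so the point is informative.

def u_function(mu, sigma):
    return abs(mu) / sigma

def select_next_point(candidates, predict):
    """Return the Monte Carlo candidate with the smallest U value."""
    return min(candidates, key=lambda x: u_function(*predict(x)))

def mock_kriging(x):
    # Stand-in surrogate: mean crosses zero near x = 2, uniform uncertainty.
    return (x - 2.0, 0.5)

candidates = [0.0, 1.0, 1.9, 3.0, 4.0]
best = select_next_point(candidates, mock_kriging)
print(best)  # the candidate closest to the predicted limit state
```

In AK-MCS the loop stops once min U exceeds a level (often 2, i.e. about 97.7% confidence in the predicted sign), after which the surrogate replaces the performance function in the Monte Carlo run.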
Funding: Supported by the National Natural Science Foundation of China (62171088, U19A2052, 62020106011) and the Medico-Engineering Cooperation Funds from University of Electronic Science and Technology of China (ZYGX2021YGLH215, ZYGX2022YGRH005).
Abstract: Deep neural networks (DNNs) have achieved great success in many data processing applications. However, high computational complexity and storage cost make deep learning difficult to use on resource-constrained devices, and its large power cost is not environmentally friendly. In this paper, we focus on low-rank optimization for efficient deep learning techniques. In the space domain, DNNs are compressed by low-rank approximation of the network parameters, which directly reduces the storage requirement through a smaller number of network parameters. In the time domain, the network parameters can be trained in a few subspaces, which enables efficient training with fast convergence. Model compression in the spatial domain is summarized into three categories: pre-train, pre-set, and compression-aware methods. Combined with a series of complementary techniques, such as sparse pruning, quantization, and entropy coding, these methods can be assembled into an integrated framework with lower computational complexity and storage. Beyond the summary of recent technical advances, we report two findings to motivate future work. One is that the effective rank, derived from the Shannon entropy of the normalized singular values, outperforms conventional sparsity measures such as the ℓ1 norm for network compression. The other is a spatial and temporal balance for tensorized neural networks: to accelerate the training of tensorized neural networks, it is crucial to leverage redundancy for both model compression and subspace training.
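The effective rank mentioned in the first finding has a standard closed form (Roy and Vetterli's definition): the exponential of the Shannon entropy of the normalized singular values. A minimal sketch, taking the singular values as given rather than computing an SVD, to stay dependency-free:

```python
# Hedged sketch of the effective rank: normalize the singular values to a
# probability distribution, take its Shannon entropy (in nats), exponentiate.
import math

def effective_rank(singular_values):
    total = sum(singular_values)
    p = [s / total for s in singular_values if s > 0]    # normalized spectrum
    entropy = -sum(pi * math.log(pi) for pi in p)        # Shannon entropy
    return math.exp(entropy)

# Four equal singular values behave like a full rank-4 matrix:
print(effective_rank([1.0, 1.0, 1.0, 1.0]))   # approximately 4
# A fast-decaying spectrum yields a small, fractional effective rank,
# signaling that a low-rank approximation would lose little:
print(effective_rank([10.0, 1.0, 0.1, 0.01]))
```

Unlike a hard rank threshold, this measure varies smoothly with the spectrum, which is one reason it can guide compression better than counting-style measures such as the ℓ1 norm.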
Funding: Project (G2022165004L) supported by the High-end Foreign Expert Introduction Program, China; Project (2021XM3008) supported by the Special Foundation of Postdoctoral Support Program, Chongqing, China; Project (2018-ZL-01) supported by the Sichuan Transportation Science and Technology Project, China; Project (HZ2021001) supported by the Chongqing Municipal Education Commission, China.
Abstract: Landslide susceptibility mapping is a crucial tool for disaster prevention and management. The performance of a conventional data-driven model is greatly influenced by the quality of the sample data, and the random selection of negative samples results in a lack of interpretability throughout the assessment process. To address this limitation and construct a high-quality negative-sample database, this study introduces a physics-informed machine learning approach, combining the random forest model with Scoops 3D, to optimize the negative-sample selection strategy and assess the landslide susceptibility of the study area. Scoops 3D is employed to determine the factor of safety using Bishop's simplified method. Instead of conventional random selection, negative samples are extracted from areas with a high factor of safety. Subsequently, the results of the conventional random forest model and the physics-informed data-driven model are analyzed and discussed, focusing on model performance and prediction uncertainty. In comparison to conventional methods, the physics-informed model, with a safety area threshold of 3, demonstrates a noteworthy improvement in the mean AUC value of 36.7%, coupled with reduced prediction uncertainty. Evidently, the choice of the safety area threshold affects both prediction uncertainty and model performance.
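The physics-informed negative sampling described above reduces to a filter-then-sample step: keep only cells whose Scoops 3D factor of safety (FS) exceeds the threshold (3 in the best-performing setting), then draw negatives from those. The cell values below are illustrative stand-ins for real Scoops 3D output.

```python
# Hedged sketch of physics-informed negative-sample selection: negatives are
# drawn only from physically stable cells (FS above a threshold), instead of
# at random over the whole study area.
import random

def select_negative_samples(cells, fs_threshold=3.0, n_samples=2, seed=42):
    """cells: list of (cell_id, factor_of_safety). Sample stable cells only."""
    stable = [cid for cid, fs in cells if fs > fs_threshold]
    rng = random.Random(seed)  # seeded for reproducibility
    return rng.sample(stable, min(n_samples, len(stable)))

cells = [("A", 0.9), ("B", 1.2), ("C", 3.5),
         ("D", 4.1), ("E", 2.7), ("F", 5.0)]
negatives = select_negative_samples(cells)
print(negatives)  # drawn only from the stable cells C, D, F
```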
Funding: Supported by the National Natural Science Foundation of China (62375013).
Abstract: As the core component of inertial navigation systems, fiber optic gyroscopes (FOGs), with technical advantages such as low power consumption, long lifespan, fast startup, and flexible structural design, are widely used in aerospace, unmanned driving, and other fields. However, due to the temperature sensitivity of optical devices, environmental temperature introduces errors into FOGs, greatly limiting their output accuracy. This work investigates machine-learning-based temperature error compensation techniques for FOGs, focusing on compensating the bias errors generated in the fiber ring by the Shupe effect. A composite model is proposed based on k-means clustering, support vector regression, and particle swarm optimization, and sample redundancy is significantly reduced by adopting interval-sequence sampling. Metrics such as root mean square error (RMSE), mean absolute error (MAE), bias stability, and Allan variance are selected to evaluate the model's performance and compensation effectiveness. This work effectively enhances the consistency between data and models across different temperature ranges and temperature gradients, improving the bias stability of the FOG from 0.022 °/h to 0.006 °/h. Compared with existing methods using a single machine learning model, the proposed method raises the bias stability improvement of the compensated FOG from 57.11% to 71.98%, and the suppression of the rate ramp noise coefficient from 2.29% to 14.83%. This work improves the post-compensation accuracy of FOGs, providing theoretical guidance and technical references for sensor error compensation in other fields.
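The Allan variance used above to assess gyro noise can be sketched in its simplest non-overlapping form: average the rate series into clusters of m samples, then take half the mean squared difference of successive cluster averages. The synthetic rate data are illustrative, not real FOG output.

```python
# Hedged sketch of the non-overlapping Allan variance at cluster size m.
# Plotting sigma(tau) over many m values yields the usual Allan deviation
# curve from which bias stability and ramp coefficients are read off.

def allan_variance(rates, m):
    """Half the mean squared difference of successive m-sample averages."""
    n_clusters = len(rates) // m
    means = [sum(rates[i * m:(i + 1) * m]) / m for i in range(n_clusters)]
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(n_clusters - 1)]
    return sum(diffs) / (2 * len(diffs))

# A perfectly constant rate has zero Allan variance at any cluster size:
steady = [0.01] * 1000
print(allan_variance(steady, 10))

# Alternating noise averages out as the clusters grow:
noisy = [0.01 + (0.001 if i % 2 == 0 else -0.001) for i in range(1000)]
print(allan_variance(noisy, 1), allan_variance(noisy, 10))
```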