Shock waves caused by a sudden release of high energy, such as an explosion or blast, usually affect a significant range of areas. Using a uniform fine mesh to capture sharp shock waves and obtain precise results is inefficient in terms of computational resources. This is particularly evident when large-scale fluid field simulations are conducted with significant differences in computational domain size. In this work, a variable-domain-size adaptive mesh enlargement (vAME) method is developed based on the previously proposed adaptive mesh enlargement (AME) method for modeling multi-explosive explosion problems. The vAME method reduces the division of numerous empty areas or unnecessary computational domains by adaptively suspending the enlargement operation in one or two directions, rather than in all directions as in the AME method. A series of numerical tests via AME and vAME with varying non-integral enlargement ratios and different mesh numbers are simulated to verify the efficiency and order of accuracy. An estimate of the speedup ratio is analyzed for further efficiency comparison. Several large-scale near-ground explosion experiments with single/multiple explosives are performed to analyze the shock wave superposition formed by the incident wave, reflected wave, and Mach wave. Additionally, the vAME method is employed to validate the accuracy and to investigate the behavior of the fluid field and shock wave propagation for explosive quantities ranging from 1 to 5 while maintaining a constant total mass. The results show a satisfactory correlation between the overpressure-versus-time curves of the experiments and the numerical simulations. The vAME method yields competitive efficiency, increasing computational speed by factors of approximately 3.0 and 120,000 in comparison to AME and the fully fine mesh method, respectively. This indicates that the vAME method reduces computational cost with minimal impact on the results for such large-scale high-energy release problems with significant differences in computational domain size.
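The speedup ratios reported above reduce to a simple cost ratio. The sketch below shows that bookkeeping with illustrative numbers (the cell-step counts are hypothetical, chosen only to reproduce the order of magnitude reported, and are not the paper's data):

```python
# Hedged sketch: speedup of an adaptive run over a baseline, estimated as
# the ratio of total cell-update counts (toy numbers, not the paper's data).
def speedup(baseline_cell_steps, method_cell_steps):
    return baseline_cell_steps / method_cell_steps

fine_mesh_cost = 1.2e12   # uniform fine mesh: cells x time steps (illustrative)
vame_cost = 1.0e7         # adaptive vAME run (illustrative)
ratio = speedup(fine_mesh_cost, vame_cost)
```

With these toy inputs the ratio is 1.2e5, the same order as the roughly 120,000-fold speedup the abstract reports against the fully fine mesh.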
How to effectively reduce the energy consumption of large-scale data centers is a key issue in cloud computing. This paper presents a novel low-power task scheduling algorithm (L3SA) for large-scale cloud data centers. A winner tree is introduced with the data nodes as its leaf nodes, and the final winner is selected with the aim of reducing energy consumption. The complexity of large-scale cloud data centers is fully considered, and a task comparison coefficient is defined to make the task scheduling strategy more reasonable. Experiments and performance analysis show that the proposed algorithm can effectively improve node utilization and reduce the overall power consumption of the cloud data center.
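The winner-tree selection idea can be sketched as a tournament over scored data nodes. The code below is only an illustration of that general mechanism, with hypothetical node records and scoring; it is not the L3SA algorithm itself:

```python
# Hypothetical sketch of winner-tree node selection: each leaf is a data
# node, and the tournament's final winner is the node with the lowest
# score (here, a stand-in for expected energy cost of the incoming task).
def select_winner(nodes, score):
    """Tournament selection: repeatedly pair nodes and keep the lower-score
    winner until one node remains (O(n) comparisons, O(log n) rounds)."""
    layer = list(nodes)
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            a, b = layer[i], layer[i + 1]
            nxt.append(a if score(a) <= score(b) else b)
        if len(layer) % 2:      # an odd node gets a bye into the next round
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]

# Toy usage: pick the node with the smallest power draw.
nodes = [{"id": 0, "power": 120}, {"id": 1, "power": 95}, {"id": 2, "power": 110}]
best = select_winner(nodes, lambda n: n["power"])
```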
Accurate positioning is one of the essential requirements for numerous applications of remote sensing data, especially in the event of a noisy or unreliable satellite signal. Toward this end, we present a novel framework for aircraft geo-localization over a large range that only requires a downward-facing monocular camera, an altimeter, a compass, and an open-source Vector Map (VMAP). The algorithm combines matching and particle filter methods. A shape vector and the correlation between two building contour vectors are defined, and a coarse-to-fine building vector matching (CFBVM) method is proposed in the matching stage, in which the original matching results are described by a Gaussian mixture model (GMM). Subsequently, an improved resampling strategy is designed to reduce the computing expense of a huge number of initial particles, and a credibility indicator is designed to avoid location mistakes in the particle filter stage. An experimental evaluation of the approach based on flight data is provided. On a flight at a height of 0.2 km over a flight distance of 2 km, the aircraft is geo-localized in a reference map of 11,025 km² using 0.09 km² aerial images without any prior information. The absolute localization error is less than 10 m.
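The paper's improved resampling strategy is not specified in the abstract; as context, the sketch below shows the standard systematic resampling baseline that particle-filter resampling schemes typically build on (a textbook technique, not the authors' method):

```python
import random

def systematic_resample(particles, weights, u0=None):
    """One uniform draw u0 in [0, 1/n); positions u0 + i/n sweep the
    normalized cumulative weights, so high-weight particles are kept."""
    n = len(particles)
    total = float(sum(weights))
    if u0 is None:
        u0 = random.random() / n
    out, j, c = [], 0, weights[0] / total
    for i in range(n):
        pos = u0 + i / n
        while pos > c:
            j += 1
            c += weights[j] / total
        out.append(particles[j])
    return out

# Toy check: all weight on the second particle -> it fills the new set.
resampled = systematic_resample(["a", "b", "c", "d"], [0.0, 1.0, 0.0, 0.0], u0=0.1)
```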
This study presents a machine learning-based method for predicting the fragment velocity distribution in warhead fragmentation under explosive loading conditions. The fragment resultant velocities are correlated with key design parameters, including casing dimensions and detonation positions. The paper details the finite element analysis for fragmentation, the characterization of the dynamic hardening and fracture models, the generation of comprehensive datasets, and the training of the artificial neural network (ANN) model. The results show the influence of casing dimensions on fragment velocity distributions, with the resultant velocity tending to increase with reduced thickness and with increased length and diameter. The model's predictive capability is demonstrated through accurate predictions for both training and testing datasets, showing its potential for real-time prediction of fragmentation performance.
Estimating trawler fishing effort plays a critical role in characterizing marine fisheries activities, quantifying the ecological impact of trawling, and refining regulatory frameworks and policies. Understanding trawler fishing inputs offers crucial scientific data to support the sustainable management of offshore fishery resources in China. An XGBoost algorithm was introduced and optimized through Harris Hawks Optimization (HHO) to develop a model for identifying trawler fishing behaviour. The model demonstrated exceptional performance, achieving accuracy, sensitivity, specificity, and Matthews correlation coefficient values of 0.9713, 0.9806, 0.9632, and 0.9425, respectively. Using this model to detect fishing activities, the fishing effort of trawlers from Shandong Province in the sea area between 119°E and 124°E and between 32°N and 40°N in 2021 was quantified. A heatmap of fishing effort, generated at a spatial resolution of 1/8°, revealed that fishing activities were predominantly concentrated in two regions: 121.1°E to 124°E, 35.7°N to 38.7°N, and 119.8°E to 122.8°E, 33.6°N to 35.4°N. This research provides a foundation for quantitative evaluations of fishery resources, offering vital data to promote the sustainable development of marine capture fisheries.
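The four metrics reported above all derive from a binary confusion matrix. The sketch below shows their standard definitions on toy counts (the counts are illustrative, not the study's data):

```python
import math

# Standard binary-classification metrics from a confusion matrix
# (tp = fishing correctly flagged, tn = non-fishing correctly flagged).
def confusion_metrics(tp, tn, fp, fn):
    acc = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn)                  # sensitivity / recall
    spec = tn / (tn + fp)                  # specificity
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return acc, sens, spec, mcc

# Toy counts, not the paper's evaluation data.
acc, sens, spec, mcc = confusion_metrics(tp=90, tn=85, fp=10, fn=15)
```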
The reverse design of solid rocket motor (SRM) propellant grain involves determining the grain geometry that closely matches a predefined internal ballistic curve. While existing reverse design methods are feasible, they often face challenges such as lengthy computation times and limited accuracy. To achieve rapid and accurate matching between a targeted ballistic curve and a complex grain shape, this paper proposes a novel reverse design method for SRM propellant grain based on time-series data imaging and a convolutional neural network (CNN). First, a finocyl grain shape-internal ballistic curve dataset is created using parametric modeling techniques to comprehensively cover the design space. Next, the internal ballistic time-series data are encoded into three-channel images, establishing a potential relationship between the ballistic curves and their image representations. A CNN is then constructed and trained on these encoded images. Once trained, the model enables efficient inference of propellant grain dimensions from a target internal ballistic curve. Comparative experiments across various neural network models validate the effectiveness of the feature extraction method that transforms internal ballistic time-series data into images, as well as its generalization capability across different CNN architectures. Ignition tests were performed based on the predicted propellant grain. The results demonstrate that the relative error between the experimental internal ballistic curves and the target curves is less than 5%, confirming the validity and feasibility of the proposed reverse design methodology.
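The abstract does not specify the three-channel encoding scheme; a common family of time-series-to-image encodings is the Gramian Angular Field, sketched below as one plausible channel. Treat this as an illustration of the general idea, not the paper's actual transform:

```python
import math

def gasf(series):
    """Gramian Angular Summation Field: rescale the series to [-1, 1],
    map values to angles phi = arccos(x), and form the 2-D 'image'
    G[i][j] = cos(phi_i + phi_j)."""
    lo, hi = min(series), max(series)
    scaled = [2 * (x - lo) / (hi - lo) - 1 for x in series]
    phi = [math.acos(max(-1.0, min(1.0, s))) for s in scaled]
    n = len(series)
    return [[math.cos(phi[i] + phi[j]) for j in range(n)] for i in range(n)]

# Toy 3-point series -> 3x3 single-channel image.
img = gasf([0.0, 0.5, 1.0])
```

Stacking variants of this transform (e.g. summation field, difference field, and a transition field) is one conventional way to obtain three channels.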
Heterogeneous federated learning (HtFL) has gained significant attention due to its ability to accommodate diverse models and data from distributed combat units. Prototype-based HtFL methods have been proposed to reduce the high communication cost of transmitting model parameters; they share only class representatives between heterogeneous clients while maintaining privacy. However, existing prototype learning approaches fail to take the data distribution of clients into consideration, which results in suboptimal global prototype learning and insufficient client model personalization. To address these issues, we propose a fair trainable prototype federated learning (FedFTP) algorithm, which employs a fair sampling training prototype (FSTP) mechanism and a hyperbolic space constraints (HSC) mechanism to enhance the fairness and effectiveness of prototype learning on the server in heterogeneous environments. Furthermore, a local prototype stable update (LPSU) mechanism based on contrastive learning is proposed to maintain personalization while promoting global consistency. Comprehensive experimental results demonstrate that FedFTP achieves state-of-the-art performance in HtFL scenarios.
Recently, high-precision trajectory prediction of ballistic missiles in the boost phase has become a research hotspot. This paper proposes a trajectory prediction algorithm driven by data and knowledge (DKTP) to solve this problem. Firstly, the complex dynamic characteristics of a ballistic missile in the boost phase are analyzed in detail. Secondly, combining the missile dynamics model with the target gravity turning model, a knowledge-driven target three-dimensional turning (T3) model is derived. Then, a BP neural network is trained on a boost-phase trajectory database of typical scenarios to obtain a data-driven state parameter mapping (SPM) model. On this basis, an online trajectory prediction framework driven by data and knowledge is established: the SPM model predicts the three-dimensional turning coefficients of the target from its current state, and the state of the target at the next moment is obtained by combining the T3 model. Finally, simulation verification is carried out under various conditions. The simulation results show that the DKTP algorithm combines the advantages of data-driven and knowledge-driven approaches, improves interpretability, and reduces uncertainty, achieving high-precision trajectory prediction of ballistic missiles in the boost phase.
[Objective] Accurate prediction of tomato growth height is crucial for optimizing production environments in smart farming. However, current prediction methods predominantly rely on empirical, mechanistic, or learning-based models that utilize either image data or environmental data. These methods fail to fully leverage multi-modal data to capture the diverse aspects of plant growth comprehensively. [Methods] To address this limitation, a two-stage phenotypic feature extraction (PFE) model based on the deep learning algorithms of recurrent neural networks (RNN) and long short-term memory (LSTM) was developed. The model integrated environmental and plant information to provide a holistic understanding of the growth process, employed phenotypic and temporal feature extractors to comprehensively capture both types of features, and enabled a deeper understanding of the interaction between tomato plants and their environment, ultimately leading to highly accurate predictions of growth height. [Results and Discussions] The experimental results showed the model's effectiveness: when predicting the next two days based on the past five days, the PFE-based RNN and LSTM models achieved mean absolute percentage errors (MAPE) of 0.81% and 0.40%, respectively, significantly lower than the 8.00% MAPE of the large language model (LLM) and the 6.72% MAPE of the Transformer-based model. In longer-term predictions, the 10-day prediction for 4 days ahead and the 30-day prediction for 12 days ahead, the PFE-RNN model continued to outperform the two baseline models, with MAPEs of 2.66% and 14.05%, respectively. [Conclusions] The proposed method, which leverages phenotypic-temporal collaboration, shows great potential for intelligent, data-driven management of tomato cultivation, making it a promising approach for enhancing the efficiency and precision of smart tomato planting management.
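The MAPE metric used throughout the comparison above has the standard definition sketched below (toy values, not the study's measurements; the metric assumes no true value is zero):

```python
# Mean absolute percentage error, as a percentage.
def mape(y_true, y_pred):
    return 100.0 * sum(abs(t - p) / abs(t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy heights in cm: per-sample errors of 1% and 2% average to 1.5%.
err = mape([100.0, 200.0], [99.0, 204.0])
```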
A security issue with multi-sensor unmanned aerial vehicle (UAV) cyber-physical systems (CPS) is investigated in this paper from the viewpoint of a false data injection (FDI) attacker. The FDI attacker can attack the feedback and feed-forward channels simultaneously with limited resources. The attacker aims at degrading the UAV CPS's estimation performance as much as possible while maintaining stealthiness, characterized by the Kullback-Leibler (K-L) divergence. Because the attacker is resource-limited and can only attack a subset of the sensors, both the sensors to attack and the specific forms of the attack signals at each instant must be chosen. The sensor selection principle is investigated for time-invariant attack covariances. Additionally, the optimal switching attack strategies for time-variant attack covariances are modeled as a multi-agent Markov decision process (MDP) with a hybrid discrete-continuous action space, which is then solved by the deep multi-agent parameterized Q-networks (MAPQN) method. Ultimately, a quadrotor near-hover system is used to validate the effectiveness of the results in the simulation section.
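The stealthiness constraint above is a K-L divergence bound between the attacked and nominal distributions. For univariate Gaussians this divergence has the closed form sketched below (a standard formula shown for illustration; the paper's multivariate setting generalizes it):

```python
import math

def kl_gaussian(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ) for univariate Gaussians."""
    return 0.5 * (var0 / var1 + (mu1 - mu0) ** 2 / var1
                  - 1.0 + math.log(var1 / var0))

# Identical distributions are perfectly stealthy: divergence 0.
d_same = kl_gaussian(0.0, 1.0, 0.0, 1.0)
# A mean shift of 1 at unit variance costs 0.5 nats of stealthiness.
d_shift = kl_gaussian(1.0, 1.0, 0.0, 1.0)
```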
The decentralized robust guaranteed cost control problem is studied for a class of interconnected singular large-scale systems with time delay and norm-bounded time-invariant parameter uncertainty under a given quadratic cost performance function. The problem addressed in this study is to design a decentralized robust guaranteed cost state feedback controller such that the closed-loop system is not only regular, impulse-free, and stable, but also guarantees an adequate level of performance for all admissible uncertainties. A sufficient condition for the existence of decentralized robust guaranteed cost state feedback controllers is proposed in terms of a linear matrix inequality (LMI). When this condition is feasible, the desired controller gain matrices can be obtained. Finally, an illustrative example demonstrates the effectiveness of the proposed approach.
The temperature control of a large-scale vertical quench furnace is very difficult due to its huge volume and complex thermal exchanges. To meet the technical requirements of the quenching process, a temperature control system that integrates temperature calibration and temperature uniformity control is developed for the thermal treatment of aluminum alloy workpieces in the large-scale vertical quench furnace. To obtain the aluminum alloy workpiece temperature, an air heat transfer model is newly established to describe the temperature gradient distribution, so that the immeasurable workpiece temperature can be calibrated from the available thermocouple temperature. To achieve uniformity control of the furnace temperature, a second-order partial differential equation (PDE) is derived to describe the thermal dynamics inside the vertical quench furnace. Based on the PDE, a decoupling matrix is constructed to resolve the coupling and decompose the heating process into multiple independent heating subsystems. Expert control rules are then used to find a compromise between temperature rise time and overshoot during the quenching process. The developed temperature control system has been successfully applied to a 31 m large-scale vertical quench furnace, and industrial running results show significant improvement in temperature uniformity, lower overshoot, and shortened processing time.
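The decoupling-matrix idea can be illustrated on a toy static two-zone system: if the outputs satisfy y = G u for a gain matrix G, choosing u = G⁻¹ v makes each virtual input v_i drive only output y_i. The matrix and numbers below are hypothetical; the paper's furnace model is PDE-based and far richer:

```python
# Static decoupling sketch for a toy 2x2 coupled heating process.
def inv2(g):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = g
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

G = [[2.0, 0.5], [0.4, 1.5]]            # coupled zone gains (hypothetical)
D = inv2(G)                              # decoupler: u = D v
y = matvec(G, matvec(D, [1.0, 0.0]))     # virtual step on zone 1 only
```

With the decoupler in place, the step on the first virtual input moves only the first output (y approximates [1, 0]), which is the independence the furnace controller exploits per heating subsystem.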
Numerical analysis of the optimal supporting time and long-term stability index of the surrounding rocks in the underground plant of the Xiangjiaba hydropower station was carried out based on rheological theory. Firstly, the mechanical parameters of each rock group were identified from experimental data; secondly, rheological calculation and analysis were performed for the cavern under stepped excavation without support; finally, the optimal supporting time at the characteristic point of a typical section was obtained once the creep rate and displacement after each excavation step satisfied the criterion of the optimal supporting time. Excavation was repeated when the optimal supporting time was identified, and the long-term stability creep time and the maximum creep deformation of the characteristic point were determined in accordance with the criterion of the long-term stability index. It is shown that the optimal supporting time of the characteristic point in the underground plant of the Xiangjiaba hydropower station is 5-8 d, the long-term stability time of the typical section is 126 d, and the corresponding largest creep deformation is 24.30 mm. When the cavern is supported, the cavern deformation is significantly reduced and the stress states of the surrounding rock masses are remarkably improved.
An optimal tracking control (OTC) problem for linear time-delay large-scale systems affected by external persistent disturbances is investigated. Based on the internal model principle, a disturbance compensator is constructed, and the system with persistent disturbances is transformed into an augmented system without them. The original OTC problem of the linear time-delay system is transformed into a sequence of linear two-point boundary value (TPBV) problems by introducing a sensitivity parameter and expanding a Maclaurin series around it. By solving the OTC law of the augmented system, the OTC law of the original system is obtained. A numerical simulation is provided to illustrate the effectiveness of the proposed method.
The decentralized robust stabilization problem for discrete-time fuzzy large-scale systems with parametric uncertainties is considered. The uncertain fuzzy large-scale system consists of N interconnected T-S fuzzy subsystems, and the parametric uncertainties are unknown but norm-bounded. Based on Lyapunov stability theory and the decentralized control theory of large-scale systems, a design scheme of decentralized parallel distributed compensation (DPDC) fuzzy controllers is proposed to ensure the asymptotic stability of the whole fuzzy large-scale system. The existence conditions for these controllers take the form of LMIs. Finally, a numerical simulation example shows the utility of the proposed method.
Based on the explicit finite element (FE) method and the ABAQUS platform, and considering both the inhomogeneity of soils and the concave-convex fluctuation of topography, a large-scale refined two-dimensional (2D) FE nonlinear analytical model for the Fuzhou Basin was established. The peak ground motion acceleration (PGA) and the focusing effect with depth were analyzed, and results from wave propagation via the one-dimensional (1D) layered-medium equivalent linearization method were added for contrast. The results show that: 1) PGA at different depths is obviously amplified compared with the input ground motion, and the amplification effect of both funnel-shaped depression and upheaval areas (based on the shape of the bedrock surface) is especially remarkable. The 2D results indicate that PGA decreases non-monotonically with depth, with a greater focusing effect at some particular layers, while the 1D results show that PGA decreases with depth except for abrupt increases at a few particular depths; 2) In the funnel-shaped depression areas, the PGA amplification effect above 8 m depth is relatively large, while in the upheaval areas the amplification effect from 15 m to 25 m depth is more significant; regularities of the PGA amplification effect could hardly be found in the remaining areas; 3) The PGA amplification coefficient decays with depth at a higher rate under a smaller input motion; 4) The frequency spectral characteristics of the input motion have noticeable effects on the PGA amplification tendency.
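The PGA amplification coefficient discussed above is conventionally the ratio of the response PGA at a given depth to the input (bedrock) PGA. The sketch below uses that common definition with toy values; the profile numbers are illustrative, not the Fuzhou Basin results:

```python
# Amplification coefficient: response PGA over input PGA (dimensionless).
def amplification(pga_at_depth, pga_input):
    return pga_at_depth / pga_input

# Toy depth profile, depth (m) -> PGA (g), with a 0.15 g input motion.
profile = {0.0: 0.30, 8.0: 0.24, 15.0: 0.22}
coeffs = {depth: amplification(pga, 0.15) for depth, pga in profile.items()}
```

A coefficient above 1 at every depth corresponds to the overall amplification the 2D analysis reports, and the decrease of the toy coefficients with depth mirrors the general decay trend.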
This paper focuses on the problem of non-fragile decentralized guaranteed cost control for uncertain neutral large-scale interconnected systems with time-varying delays in the state, control input, and interconnections. A novel scheme is developed that views the interconnections with time-varying delays as effective information rather than disturbances. Based on Lyapunov stability theory and various techniques for decomposing and bounding matrices, a design method for the non-fragile decentralized guaranteed cost controller for unperturbed neutral large-scale interconnected systems is proposed and the guaranteed cost is presented. Further results are derived for the uncertain case from the criterion for unperturbed systems. Finally, an illustrative example shows that the results are significantly better than existing results in the literature.
The most common apparatus used to investigate the load-deformation parameters of homogeneous fine-grained soils is a Casagrande-type oedometer. A typical Casagrande oedometer cell has an internal diameter of 76 mm and a height of 19 mm. However, the dimensions of this kind of apparatus do not meet the requirements of some civil engineering applications, such as studying the load-deformation characteristics of specimens with large-diameter particles, e.g., granular materials or municipal solid waste. Therefore, a large-scale oedometer with an internal diameter of 490 mm was designed and developed. The new apparatus makes it possible to evaluate the load-deformation characteristics of soil specimens with different diameter-to-height ratios, and it can also measure the coefficient of lateral earth pressure at rest. The details and capabilities of the developed oedometer are provided and discussed. To study its performance and efficiency, a number of consolidation tests were performed on Firoozkoh No. 161 sand using both the newly developed large-scale oedometer and the 50 mm diameter Casagrande oedometer. Benchmark test results show that the consolidation parameters measured by the large-scale oedometer are comparable to those measured by the Casagrande-type oedometer.
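The at-rest lateral earth pressure coefficient the new cell can measure is defined as the ratio of horizontal to vertical effective stress; Jaky's empirical estimate from the friction angle is shown alongside for comparison. All stress and angle values below are toy numbers, not the study's measurements:

```python
import math

# K0 from direct measurement: horizontal over vertical effective stress.
def k0_measured(sigma_h, sigma_v):
    return sigma_h / sigma_v

# Jaky's empirical estimate for normally consolidated soils: 1 - sin(phi').
def k0_jaky(phi_deg):
    return 1.0 - math.sin(math.radians(phi_deg))

k_meas = k0_measured(sigma_h=50.0, sigma_v=100.0)   # stresses in kPa (toy)
k_est = k0_jaky(30.0)                                # phi' = 30 deg (toy)
```

For this toy case the measured ratio and Jaky's estimate coincide at 0.5, which is the kind of cross-check such an instrumented cell enables.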
文摘This study presents a machine learning-based method for predicting fragment velocity distribution in warhead fragmentation under explosive loading condition.The fragment resultant velocities are correlated with key design parameters including casing dimensions and detonation positions.The paper details the finite element analysis for fragmentation,the characterizations of the dynamic hardening and fracture models,the generation of comprehensive datasets,and the training of the ANN model.The results show the influence of casing dimensions on fragment velocity distributions,with the tendencies indicating increased resultant velocity with reduced thickness,increased length and diameter.The model's predictive capability is demonstrated through the accurate predictions for both training and testing datasets,showing its potential for the real-time prediction of fragmentation performance.
Abstract: Estimating trawler fishing effort plays a critical role in characterizing marine fisheries activities, quantifying the ecological impact of trawling, and refining regulatory frameworks and policies. Understanding trawler fishing inputs offers crucial scientific data to support the sustainable management of offshore fishery resources in China. An XGBoost algorithm was introduced and optimized through Harris Hawks Optimization (HHO) to develop a model for identifying trawler fishing behaviour. The model demonstrated exceptional performance, achieving accuracy, sensitivity, specificity, and Matthews correlation coefficient values of 0.9713, 0.9806, 0.9632, and 0.9425, respectively. Using this model to detect fishing activities, the fishing effort of trawlers from Shandong Province in the sea area between 119°E to 124°E and 32°N to 40°N in 2021 was quantified. A heatmap of fishing effort, generated with a spatial resolution of 1/8°, revealed that fishing activities were predominantly concentrated in two regions: 121.1°E to 124°E, 35.7°N to 38.7°N, and 119.8°E to 122.8°E, 33.6°N to 35.4°N. This research provides a foundation for quantitative evaluations of fishery resources and offers vital data to promote the sustainable development of marine capture fisheries.
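The four quoted performance figures have standard definitions over the binary confusion matrix; a minimal sketch of how such metrics are computed (generic formulas, independent of the paper's model):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, and Matthews correlation
    coefficient for binary labels (1 = fishing, 0 = not fishing)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn)                # true positive rate
    spec = tn / (tn + fp)                # true negative rate
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return acc, sens, spec, mcc
```

MCC is the most informative of the four on imbalanced data, since it accounts for all four confusion-matrix cells at once.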
Abstract: The reverse design of solid rocket motor (SRM) propellant grain involves determining the grain geometry that closely matches a predefined internal ballistic curve. While existing reverse design methods are feasible, they often face challenges such as lengthy computation times and limited accuracy. To achieve rapid and accurate matching between the targeted ballistic curve and a complex grain shape, this paper proposes a novel reverse design method for SRM propellant grain based on time-series data imaging and a convolutional neural network (CNN). First, a finocyl grain shape-internal ballistic curve dataset is created using parametric modeling techniques to comprehensively cover the design space. Next, the internal ballistic time-series data is encoded into three-channel images, establishing a potential relationship between the ballistic curves and their image representations. A CNN is then constructed and trained on these encoded images. Once trained, the model enables efficient inference of propellant grain dimensions from a target internal ballistic curve. This paper conducts comparative experiments across various neural network models, validating the effectiveness of the feature extraction method that transforms internal ballistic time-series data into images, as well as its generalization capability across different CNN architectures. Ignition tests were performed based on the predicted propellant grain. The results demonstrate that the relative error between the experimental internal ballistic curves and the target curves is less than 5%, confirming the validity and feasibility of the proposed reverse design methodology.
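The abstract does not specify the exact three-channel encoding. One widely used way to turn a time series into an image channel is the Gramian Angular Field; the sketch below assumes that encoding purely for illustration:

```python
import numpy as np

def gramian_angular_field(series):
    """Encode a 1-D time series as a 2-D image channel via the Gramian
    Angular Summation Field: rescale to [-1, 1], map to angles, and take
    pairwise cosines of angle sums."""
    s = np.asarray(series, dtype=float)
    x = 2 * (s - s.min()) / (s.max() - s.min()) - 1  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1, 1))               # polar angle per sample
    return np.cos(phi[:, None] + phi[None, :])       # (n, n) image
```

Stacking encodings of this kind (e.g., summation field, difference field, and a Markov transition field) is one common way to obtain the three channels a standard CNN expects.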
Funding: Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (No. 2022D01B187).
Abstract: Heterogeneous federated learning (HtFL) has gained significant attention due to its ability to accommodate diverse models and data from distributed combat units. Prototype-based HtFL methods have been proposed to reduce the high communication cost of transmitting model parameters; they allow only class representatives to be shared between heterogeneous clients while maintaining privacy. However, existing prototype learning approaches fail to take the data distribution of clients into consideration, which results in suboptimal global prototype learning and insufficient client model personalization. To address these issues, we propose a fair trainable prototype federated learning (FedFTP) algorithm, which employs a fair sampling training prototype (FSTP) mechanism and a hyperbolic space constraints (HSC) mechanism to enhance the fairness and effectiveness of prototype learning on the server in heterogeneous environments. Furthermore, a local prototype stable update (LPSU) mechanism, based on contrastive learning, is proposed as a means of maintaining personalization while promoting global consistency. Comprehensive experimental results demonstrate that FedFTP achieves state-of-the-art performance in HtFL scenarios.
基金the National Natural Science Foundation of China (Grants No. 12072090 and No.12302056) to provide fund for conducting experiments。
Abstract: Recently, high-precision trajectory prediction of ballistic missiles in the boost phase has become a research hotspot. This paper proposes a trajectory prediction algorithm driven by data and knowledge (DKTP) to solve this problem. Firstly, the complex dynamic characteristics of a ballistic missile in the boost phase are analyzed in detail. Secondly, combining the missile dynamics model with the target gravity turning model, a knowledge-driven target three-dimensional turning (T3) model is derived. Then, a BP neural network is trained on a boost-phase trajectory database of typical scenarios to obtain a data-driven state parameter mapping (SPM) model. On this basis, an online trajectory prediction framework driven by data and knowledge is established: the SPM model predicts the three-dimensional turning coefficients of the target from its current state, and the state of the target at the next moment is obtained by combining the T3 model. Finally, simulation verification is carried out under various conditions. The simulation results show that the DKTP algorithm combines the advantages of data-driven and knowledge-driven approaches, improves the interpretability of the algorithm, and reduces uncertainty, achieving high-precision trajectory prediction of ballistic missiles in the boost phase.
Abstract: [Objective] Accurate prediction of tomato growth height is crucial for optimizing production environments in smart farming. However, current prediction methods predominantly rely on empirical, mechanistic, or learning-based models that utilize either image data or environmental data. These methods fail to fully leverage multi-modal data to capture the diverse aspects of plant growth comprehensively. [Methods] To address this limitation, a two-stage phenotypic feature extraction (PFE) model based on the deep learning algorithms of recurrent neural networks (RNN) and long short-term memory (LSTM) was developed. The model integrated environmental and plant information to provide a holistic understanding of the growth process, employed phenotypic and temporal feature extractors to comprehensively capture both types of features, and enabled a deeper understanding of the interaction between tomato plants and their environment, ultimately leading to highly accurate predictions of growth height. [Results and Discussions] The experimental results showed the model's effectiveness: when predicting the next two days based on the past five days, the PFE-based RNN and LSTM models achieved mean absolute percentage errors (MAPE) of 0.81% and 0.40%, respectively, which were significantly lower than the 8.00% MAPE of the large language model (LLM) and the 6.72% MAPE of the Transformer-based model. In longer-term predictions, the 10-day prediction for 4 days ahead and the 30-day prediction for 12 days ahead, the PFE-RNN model continued to outperform the two baseline models, with MAPEs of 2.66% and 14.05%, respectively. [Conclusions] The proposed method, which leverages phenotypic-temporal collaboration, shows great potential for intelligent, data-driven management of tomato cultivation, making it a promising approach for enhancing the efficiency and precision of smart tomato planting management.
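The MAPE figures quoted above follow the standard definition of mean absolute percentage error; a minimal implementation:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent.
    Assumes no actual value is zero (the percentage is undefined there)."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)
```

For example, predictions of 99 and 202 against true heights of 100 and 200 give a MAPE of 1.0%.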
Abstract: A security issue with multi-sensor unmanned aerial vehicle (UAV) cyber-physical systems (CPS) is investigated in this paper from the viewpoint of a false data injection (FDI) attacker. The FDI attacker can attack the feedback and feed-forward channels simultaneously with limited resources. The attacker aims to degrade the UAV CPS's estimation performance as much as possible while remaining stealthy, with stealthiness characterized by the Kullback-Leibler (K-L) divergence. Because the attacker is resource-limited and can attack only a subset of the sensors, the choice of attacked sensors as well as the specific form of the attack signals at each instant must be considered. The sensor selection principle is investigated for time-invariant attack covariances. Additionally, the optimal switching attack strategies for time-variant attack covariances are modeled as a multi-agent Markov decision process (MDP) with a hybrid discrete-continuous action space, which is then solved using the deep multi-agent parameterized Q-networks (MAPQN) method. Ultimately, a quadrotor near-hover system is used to validate the effectiveness of the results in the simulation section.
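Stealthiness here is quantified by the K-L divergence between the attacked and nominal distributions. For multivariate Gaussians this has a closed form, sketched below as a generic formula rather than the paper's specific detector model:

```python
import numpy as np

def kl_gaussian(m0, S0, m1, S1):
    """K-L divergence D( N(m0, S0) || N(m1, S1) ) between two
    k-dimensional Gaussians, via the standard closed-form expression."""
    m0, m1 = np.asarray(m0, float), np.asarray(m1, float)
    S0, S1 = np.asarray(S0, float), np.asarray(S1, float)
    k = len(m0)
    S1_inv = np.linalg.inv(S1)
    d = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))
```

A stealthy attacker keeps this divergence below a detection threshold while shaping the injected signal to maximize estimation error.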
Funding: Supported by the National Basic Research Program of China (973 Program) (2009CB320601), the National Natural Science Foundation of China (60774048, 60821063), the Program for Cheung Kong Scholars, and the Research Fund for the Doctoral Program of China Higher Education (20070145015).
Abstract: This paper studies the problem of sampled-data reliable H∞ control for uncertain continuous-time fuzzy large-scale systems with time-varying delays. First, the fuzzy hyperbolic model (FHM) is used to model certain complex large-scale systems. Then, based on the Lyapunov direct method and the decentralized control theory of large-scale systems, linear matrix inequality (LMI)-based conditions are derived to guarantee H∞ performance not only when all control components are operating well, but also in the face of some possible actuator failures. Moreover, the exact failure parameters of the actuators are not required; only the lower and upper bounds of the failure parameters are needed. The conditions depend on the upper bound of the time delay and do not depend on the derivative of the time-varying delay, so the obtained results are less conservative. Finally, two examples are provided to illustrate the design procedure and its effectiveness.
Funding: This project was supported by the National Natural Science Foundation of China (60474078) and the Science Foundation of Higher Education of Jiangsu Province, China (04KJD120016).
Abstract: The decentralized robust guaranteed cost control problem is studied for a class of interconnected singular large-scale systems with time delay and norm-bounded time-invariant parameter uncertainty under a given quadratic cost performance function. The problem addressed in this study is to design a decentralized robust guaranteed cost state feedback controller such that the closed-loop system is not only regular, impulse-free, and stable, but also guarantees an adequate level of performance for all admissible uncertainties. A sufficient condition for the existence of such controllers is proposed in terms of a linear matrix inequality (LMI). When this condition is feasible, the desired decentralized robust guaranteed cost state feedback controller gain matrices can be obtained. Finally, an illustrative example is provided to demonstrate the effectiveness of the proposed approach.
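In the simplest unforced, delay-free case, LMI feasibility conditions of this kind reduce to the Lyapunov inequality AᵀP + PA < 0 with P > 0. The sketch below checks stability that way using SciPy; it is a toy special case, not the paper's delayed singular-system LMI:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_stable(A):
    """Solve the Lyapunov equation A^T P + P A = -I and check P > 0,
    which certifies asymptotic stability of x' = A x (the simplest
    instance of an LMI feasibility test)."""
    A = np.asarray(A, dtype=float)
    P = solve_continuous_lyapunov(A.T, -np.eye(A.shape[0]))
    # Symmetrize before the eigenvalue test to absorb round-off.
    return bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))
```

General LMIs with decision variables (as in the paper's sufficient condition) are instead solved with semidefinite-programming tools rather than a single Lyapunov solve.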
Funding: Project (61174132) supported by the National Natural Science Foundation of China; Project (2015zzts047) supported by the Fundamental Research Funds for the Central Universities, China; Project (20130162110067) supported by the Research Fund for the Doctoral Program of Higher Education of China.
Abstract: Temperature control of the large-scale vertical quench furnace is very difficult due to its huge volume and complex thermal exchanges. To meet the technical requirements of the quenching process, a temperature control system that integrates temperature calibration and temperature uniformity control is developed for the thermal treatment of aluminum alloy workpieces in the large-scale vertical quench furnace. To obtain the aluminum alloy workpiece temperature, an air heat transfer model is newly established to describe the temperature gradient distribution, so that the immeasurable workpiece temperature can be calibrated from the available thermocouple temperature. To achieve uniformity control of the furnace temperature, a second-order partial differential equation (PDE) is derived to describe the thermal dynamics inside the vertical quench furnace. Based on the PDE, a decoupling matrix is constructed to resolve the coupling and decouple the heating process into multiple independent heating subsystems. Then, an expert control rule is used to find a compromise between temperature rise time and overshoot during the quenching process. The developed temperature control system has been successfully applied to a 31 m large-scale vertical quench furnace, and industrial operation results show significantly improved temperature uniformity, lower overshoot, and shortened processing time.
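The second-order PDE describing the furnace's thermal dynamics is of the diffusion type. A toy one-dimensional explicit finite-difference march, with a made-up grid and diffusivity rather than the paper's model, illustrates how such a temperature gradient evolves between fixed boundary temperatures:

```python
import numpy as np

def heat_1d(u0, r, steps):
    """Explicit finite-difference march of u_t = alpha * u_xx with fixed
    end temperatures (Dirichlet BCs); r = alpha*dt/dx**2 is the mesh
    ratio, which must satisfy r <= 0.5 for stability of this scheme."""
    assert r <= 0.5, "explicit scheme stability limit exceeded"
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        # Update interior nodes; endpoints stay fixed (Dirichlet).
        u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u
```

With the ends held at 0 and 100 degrees, the interior relaxes toward the linear steady-state profile, the 1-D analogue of the furnace's steady temperature gradient.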
Funding: Projects (50911130366, 50979030) supported by the National Natural Science Foundation of China; Project (2008BAB29B01) supported by the National Key Technology R&D Program of China.
Abstract: Numerical analysis of the optimal supporting time and long-term stability index of the surrounding rocks in the underground plant of the Xiangjiaba hydro-power station was carried out based on rheological theory. Firstly, the mechanical parameters of each rock group were identified from the experimental data; secondly, rheological calculation and analysis for the cavern under stepped excavation without supporting were performed; finally, the optimal supporting time at the characteristic point of a typical section was obtained once the creep rate and displacement after each excavation step had satisfied the criterion of the optimal supporting time. Excavation was repeated once the optimal supporting time was identified, and the long-term stability creep time and the maximum creep deformation of the characteristic point were determined in accordance with the criterion of the long-term stability index. It is shown that the optimal supporting time of the characteristic point in the underground plant of the Xiangjiaba hydro-power station is 5-8 d, the long-term stability time of the typical section is 126 d, and the corresponding largest creep deformation is 24.30 mm. When the cavern is supported, the cavern deformation is significantly reduced and the stress states of the surrounding rock masses are remarkably improved.
Funding: Supported by the National Natural Science Foundation of China (60574023) and the Natural Science Foundation of Shandong Province (Z2005G01).
Abstract: An optimal tracking control (OTC) problem for linear time-delay large-scale systems affected by external persistent disturbances is investigated. Based on the internal model principle, a disturbance compensator is constructed, and the system with persistent disturbances is transformed into an augmented system without them. The original OTC problem of the linear time-delay system is transformed into a sequence of linear two-point boundary value (TPBV) problems by introducing a sensitivity parameter and expanding in a Maclaurin series around it. By solving an OTC law for the augmented system, the OTC law of the original system is obtained. A numerical simulation is provided to illustrate the effectiveness of the proposed method.
Funding: This project was supported by NSFC Projects (60474047, 60334010), the Guangdong Province Natural Science Foundation of China (31406), and the China Postdoctoral Science Foundation (20060390725).
Abstract: The decentralized robust stabilization problem of discrete-time fuzzy large-scale systems with parametric uncertainties is considered. The uncertain fuzzy large-scale system consists of N interconnected T-S fuzzy subsystems, and the parametric uncertainties are unknown but norm-bounded. Based on Lyapunov stability theory and the decentralized control theory of large-scale systems, a design scheme of decentralized parallel distributed compensation (DPDC) fuzzy controllers is proposed to ensure the asymptotic stability of the whole fuzzy large-scale system. The existence conditions for these controllers take the form of LMIs. Finally, a numerical simulation example is given to show the utility of the proposed method.
Funding: Project (2011CB013601) supported by the National Basic Research Program of China; Project (51378258) supported by the National Natural Science Foundation of China.
Abstract: Based on the explicit finite element (FE) method and the ABAQUS platform, and considering both the inhomogeneity of soils and the concave-convex fluctuation of topography, a large-scale refined two-dimensional (2D) FE nonlinear analytical model of Fuzhou Basin was established. The peak ground motion acceleration (PGA) and the focusing effect with depth were analyzed, and results from the one-dimensional (1D) layered-medium equivalent linearization wave-propagation method were added for comparison. The results show that: 1) PGA at different depths is obviously amplified compared to the input ground motion, and the amplification effect in both the funnel-shaped depression and upheaval areas (based on the shape of the bedrock surface) is especially remarkable. The 2D results indicate that PGA decreases non-monotonically with depth, with a greater focusing effect in some particular layers, while the 1D results show that PGA decreases with depth, except at a few particular depths where it increases abruptly. 2) In the funnel-shaped depression areas, the PGA amplification effect above 8 m depth is relatively large; in the upheaval areas, the PGA amplification effect from 15 m to 25 m depth is more significant. However, little regularity of the PGA amplification effect can be found in the remaining areas. 3) The PGA amplification coefficient decays faster with depth under smaller input motions. 4) The frequency spectral characteristics of the input motion have noticeable effects on the PGA amplification tendency.
基金supported by the National Natural Science Foundation of China(6057401160972164+1 种基金60904101)the Scientific Research Fund of Liaoning Provincial Education Department(2009A544)
Abstract: This paper focuses on the problem of non-fragile decentralized guaranteed cost control for uncertain neutral large-scale interconnected systems with time-varying delays in state, control input, and interconnections. A novel scheme, viewing the interconnections with time-varying delays as effective information rather than as disturbances, is developed. Based on Lyapunov stability theory, and using various techniques of decomposing and magnifying matrices, a design method for the non-fragile decentralized guaranteed cost controller for unperturbed neutral large-scale interconnected systems is proposed and the guaranteed cost is presented. Further results are derived for the uncertain case from the criterion for unperturbed neutral large-scale interconnected systems. Finally, an illustrative example shows that the results are significantly better than existing results in the literature.
基金financial support provided by the Iran University of Science and Technology
Abstract: The most common apparatus used to investigate the load-deformation parameters of homogeneous fine-grained soils is a Casagrande-type oedometer. A typical Casagrande oedometer cell has an internal diameter of 76 mm and a height of 19 mm. However, the dimensions of this kind of apparatus do not meet the requirements of some civil engineering applications, such as studying the load-deformation characteristics of specimens with large-diameter particles, e.g., granular materials or municipal solid waste. Therefore, a large-scale oedometer with an internal diameter of 490 mm was designed and developed. The new apparatus makes it possible to evaluate the load-deformation characteristics of soil specimens with different diameter-to-height ratios, and it is able to measure the coefficient of lateral earth pressure at rest. The details and capabilities of the developed oedometer are provided and discussed. To study its performance and efficiency, a number of consolidation tests were performed on Firoozkoh No. 161 sand using both the newly developed large-scale oedometer and the 50 mm diameter Casagrande oedometer. Benchmark test results show that the consolidation parameters measured by the large-scale oedometer are comparable to the values measured by the Casagrande-type oedometer.