Since the beginning of the 21st century, advances in big data and artificial intelligence have driven a paradigm shift in the geosciences, moving the field from qualitative descriptions toward quantitative analysis, from observing phenomena to uncovering underlying mechanisms, from regional-scale investigations to global perspectives, and from experience-based inference toward data- and model-enabled intelligent prediction. AlphaEarth Foundations (AEF) is a next-generation geospatial intelligence platform that addresses these changes by introducing a unified 64-dimensional shared embedding space, enabling, for the first time, standardized representation and seamless integration of 12 distinct types of Earth observation data, including optical, radar, and lidar. This framework significantly improves data assimilation efficiency and resolves the persistent problem of "data silos" in geoscience research. AEF is helping redefine research methodologies and fostering breakthroughs, particularly in quantitative Earth system science. This paper systematically examines how AEF's innovative architecture, featuring multi-source data fusion, high-dimensional feature representation learning, and a scalable computational framework, facilitates intelligent, precise, and real-time data-driven geoscientific research. Using case studies from resource and environmental applications, we demonstrate AEF's broad potential and identify emerging innovation needs. Our findings show that AEF not only enhances the efficiency of solving traditional geoscientific problems but also stimulates novel research directions and methodological approaches.
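A shared embedding space of the kind described above is typically queried with standard vector-similarity operations, e.g. cosine similarity between two pixels' 64-dimensional vectors. The sketch below is purely illustrative: the embedding values are invented, and no AEF API is assumed.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 64-dimensional embeddings for two co-located observations
# (one optical, one radar); the values are made up for illustration.
optical_pixel = [math.sin(i * 0.1) for i in range(64)]
radar_pixel = [math.sin(i * 0.1 + 0.05) for i in range(64)]

sim = cosine_similarity(optical_pixel, radar_pixel)
print(round(sim, 3))
```

Because both modalities live in the same space, a single similarity threshold can serve retrieval and change-detection queries across sensor types.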
Cardiac arrest (CA) is a critical condition in the field of cardiovascular medicine. Despite successful resuscitation, patients continue to have a high mortality rate, largely due to post-CA syndrome (PCAS). However, the injury and pathophysiological mechanisms underlying PCAS remain unclear. Experimental animal models are valuable tools for exploring the etiology, pathogenesis, and potential interventions for CA and PCAS. Current CA animal models include electrical induction of ventricular fibrillation (VF), myocardial infarction, high potassium, asphyxia, and hemorrhagic shock. Although these models do not fully replicate the complexity of clinical CA, the mechanistic insights they provide remain highly relevant, including post-CA brain injury (PCABI), post-CA myocardial dysfunction (PAMD), systemic ischaemia/reperfusion injury (IRI), and the persistent precipitating pathology. Summarizing the methods of establishing CA models, the challenges encountered in the modeling process, and the mechanisms of PCAS can provide a foundation for developing standardized CA modeling protocols.
Model Order Reduction (MOR) has recently played an increasingly important role in complex system simulation, design, and control. For large space structures, VLSI, and MEMS (Micro-Electro-Mechanical Systems), for example, reduced-order models must be constructed to shorten development cost, increase control accuracy, and reduce controller complexity. Even in Virtual Reality (VR), where simulation and display must run in real time, the model order must be reduced. This article overviews recent advances in MOR research. MOR theory and methods may be classified as Singular Value Decomposition (SVD) based, Krylov subspace based, and others. The merits and demerits of the different methods are analyzed, and open problems are pointed out. Moreover, the application fields are overviewed and potential applications are forecast. The traditional methods, such as SVD and Krylov subspace methods, have difficulty (1) guaranteeing the stability of the original system, (2) adapting to nonlinear systems, and (3) controlling the modeling accuracy. Future work may address these problems on the foundation of the traditional methods and apply other techniques such as wavelets or signal compression.
Shock waves caused by a sudden release of high energy, such as an explosion or blast, usually affect a significant area. Using a uniform fine mesh to capture the sharp shock front and obtain precise results is inefficient in terms of computational resources, particularly when large-scale fluid-field simulations are conducted with significant differences in computational domain size. In this work, a variable-domain-size adaptive mesh enlargement (vAME) method is developed, based on the previously proposed adaptive mesh enlargement (AME) method, for modeling multi-explosive explosion problems. The vAME method reduces the meshing of empty or unnecessary computational domains by adaptively suspending the enlargement operation in one or two directions, rather than enlarging in all directions as in AME. A series of numerical tests with AME and vAME, using varying non-integral enlargement ratios and different mesh counts, is simulated to verify the efficiency and order of accuracy, and an estimate of the speedup ratio is analyzed for further efficiency comparison. Several large-scale near-ground explosion experiments with single and multiple explosives are performed to analyze the shock wave superposition formed by the incident, reflected, and Mach waves. Additionally, vAME is employed to validate the accuracy and to investigate the fluid field and shock wave propagation for explosive quantities ranging from 1 to 5 while maintaining a constant total mass. The results show a satisfactory correlation between the experimental and simulated overpressure-time curves. The vAME method yields competitive efficiency, increasing computational speed by factors of about 3.0 and roughly 120,000 relative to AME and the fully fine mesh method, respectively. This indicates that vAME reduces computational cost with minimal impact on the results for such large-scale high-energy release problems with significant differences in computational domain size.
Concrete material models play an important role in numerical predictions of the dynamic response of concrete subjected to projectile impact and charge explosion. Current concrete material models can be divided into two kinds: the hydro-elastoplastic-damage model with an independent equation of state, and the cap-elastoplastic-damage model with a continuous cap surface. The essential differences between the two kinds of models are vital for researchers choosing an appropriate model for their problems, yet existing studies draw contradictory conclusions. To resolve this issue, the constitutive theories of the two kinds of models are first reviewed. Then the constitutive theories are comprehensively compared, and the main similarities and differences are clarified and demonstrated by single-element numerical examples. Finally, numerical predictions for projectile penetration and charge explosion experiments on concrete targets are compared to further support the conclusions drawn from the constitutive comparison. It is found that both kinds of models can be used to simulate the dynamic response of concrete under projectile impact and blast loading, provided the material parameters are well calibrated, although some discrepancies between them may exist.
The architecture framework has recently become an effective method to describe system of systems (SoS) architectures, such as the United States (US) Department of Defense Architecture Framework Version 2.0 (DoDAF2.0). As a viewpoint in DoDAF2.0, the operational viewpoint (OV) describes operational activities, nodes, and resource flows, and OV models are important for SoS architecture development. However, as SoS complexity increases, constructing OV models with traditional methods exposes shortcomings such as inefficient data collection and low modeling standards. Therefore, we propose an intelligent modeling method for five OV models: operational resource flow OV-2, organizational relationships OV-4, operational activity hierarchy OV-5a, operational activities model OV-5b, and operational activity sequences OV-6c. The main idea of the method is to extract OV architecture data from text and generate interoperable OV models. First, we construct the OV meta model based on the DoDAF2.0 meta model (DM2). Second, OV architecture named entities are recognized from text with a bidirectional long short-term memory and conditional random field (BiLSTM-CRF) model, and OV architecture relationships are collected with relationship extraction rules. Finally, we define the generation rules for OV models and develop an OV modeling tool. We use an unmanned surface vehicle (USV) swarm target defense SoS architecture as a case study to verify the feasibility and effectiveness of the intelligent modeling method.
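The pipeline above pairs a learned entity recognizer with hand-written relationship-extraction rules. The entity model is out of scope here, but the rule step can be sketched: once entities are recognized, a surface pattern such as "A sends B to C" yields an OV-2 resource-flow triple. The pattern, sentence, and triple format below are invented for illustration and are not taken from the paper's tool.

```python
import re

# Hypothetical rule: "<node> sends <resource> to <node>" -> OV-2 resource flow.
FLOW_RULE = re.compile(
    r"(?P<src>\w[\w ]*?) sends (?P<res>\w[\w ]*?) to (?P<dst>\w[\w ]*)")

def extract_resource_flows(sentence):
    """Apply the rule to one sentence, returning (source, resource, target) triples."""
    return [(m.group("src").strip(), m.group("res").strip(), m.group("dst").strip())
            for m in FLOW_RULE.finditer(sentence)]

flows = extract_resource_flows("The command node sends target data to the USV swarm")
print(flows)  # [('The command node', 'target data', 'the USV swarm')]
```

In practice the rule set would be keyed to the entity types produced by the BiLSTM-CRF stage rather than raw text, but the extraction logic has this shape.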
Background Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its usage in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is strongly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various available techniques, crop models are the most effective and widely used tools for predicting yields. Results This study compares statistical and machine learning models to assess their ability to predict cotton yield across major producing districts of Karnataka, India, utilizing a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. Artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2). The model evaluation metrics, root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF), were also within acceptable limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts for which yield was predicted. These differences highlight the differential interactions of weather factors in each district for cotton yield formation, and the individual response of each weather factor under the different soils and management conditions of the major cotton-growing districts of Karnataka. Conclusions Compared with statistical models, machine learning models such as ANNs proved more efficient in forecasting cotton yield because of their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting in rainfed conditions and for studying the relative impacts of weather factors on yield. The study thus provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
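The evaluation metrics named in the abstract (RMSE, nRMSE, and modelling efficiency EF) have standard definitions, sketched below in plain Python. The observed and predicted district yields are invented sample numbers, not the study's data.

```python
import math

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def nrmse_percent(obs, pred):
    """RMSE normalized by the observed mean, in percent."""
    return 100.0 * rmse(obs, pred) / (sum(obs) / len(obs))

def modelling_efficiency(obs, pred):
    """Nash-Sutcliffe efficiency: 1 for a perfect model; <= 0 means the
    model is no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Invented district-level yields (kg/ha): observed vs. model-predicted.
observed = [520.0, 480.0, 610.0, 560.0, 450.0]
predicted = [500.0, 495.0, 590.0, 575.0, 460.0]
print(rmse(observed, predicted), nrmse_percent(observed, predicted),
      modelling_efficiency(observed, predicted))
```

A common acceptance rule treats nRMSE below roughly 10% as good agreement, which is consistent with the ±10% deviation band quoted in the abstract.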
The temperature control of a large-scale vertical quench furnace is very difficult due to its huge volume and complex thermal exchanges. To meet the technical requirements of the quenching process, a temperature control system integrating temperature calibration and temperature uniformity control is developed for the thermal treatment of aluminum alloy workpieces in the large-scale vertical quench furnace. To obtain the aluminum alloy workpiece temperature, an air heat transfer model is newly established to describe the temperature gradient distribution, so that the immeasurable workpiece temperature can be calibrated from the available thermocouple temperature. To achieve uniformity control of the furnace temperature, a second-order partial differential equation (PDE) is derived to describe the thermal dynamics inside the vertical quench furnace. Based on the PDE, a decoupling matrix is constructed to resolve the coupling and decompose the heating process into multiple independent heating subsystems. An expert control rule is then used to find a compromise between temperature rise time and overshoot during the quenching process. The developed temperature control system has been successfully applied to a 31 m large-scale vertical quench furnace, and the industrial running results show significantly improved temperature uniformity, lower overshoot, and shortened processing time.
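The decoupling step can be illustrated on a toy two-zone heater: if G is the static gain matrix coupling heater powers to zone temperatures, premultiplying the control vector by G⁻¹ makes each loop act on one zone only. The 2×2 gains below are invented; the furnace's actual PDE-derived decoupling matrix is not reproduced here.

```python
# Toy static decoupling: temperatures T = G @ u couple both heaters to
# both zones; choosing u = inv(G) @ v gives T = v, so each virtual
# input v[i] drives exactly one zone. The gain values are invented.
G = [[2.0, 0.5],
     [0.4, 1.5]]

def inv2x2(m):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(m, n):
    """Product of two 2x2 matrices."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = inv2x2(G)               # decoupling matrix
identity = matmul2(G, D)    # numerically the 2x2 identity
print(identity)
```

With the loops decoupled, each heating subsystem can then be tuned independently, e.g. by the expert rule balancing rise time against overshoot.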
Computer simulation models may be used to gain further information about missile performance variability. Model validation is an important aspect of the test program for a missile system: validation provides a basis for confidence in the model's results and is a necessary step if the model is to be used to draw inferences about the behavior of the real missile. This paper reviews methods useful for validating computer simulation models of missile systems and provides a new method with a high degree of confidence for such validation. Some examples of the use of the new method in validating computer simulation models are given.
In order to deeply research the structural discrepancy and modeling mechanism among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all the grey prediction models that are strictly derived from x^(0)(k) + az^(1)(k) = b have identical model structure and simulation precision, and that unbiased simulation of a homogeneous exponential sequence can be accomplished. However, the models derived from dx^(1)/dt + ax^(1) = b are only close to those derived from x^(0)(k) + az^(1)(k) = b when |a| is small (|a| ≤ 0.1), and they cannot achieve unbiased simulation of a homogeneous exponential sequence. These conclusions are proved and verified through theorems and examples.
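The unbiasedness claim for models strictly derived from x^(0)(k) + az^(1)(k) = b can be checked numerically: for a homogeneous exponential sequence the least-squares fit of (a, b) is exact, and simulating directly from the discrete equation reproduces the data without bias. A plain-Python sketch (the sequence 2^k is chosen arbitrarily as the test input):

```python
from itertools import accumulate

def fit_and_simulate(x0):
    """Fit x0(k) + a*z1(k) = b by least squares, then simulate recursively."""
    n = len(x0)
    x1 = list(accumulate(x0))                             # 1-AGO sequence
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    # Normal equations for the 2-parameter least-squares problem.
    m = n - 1
    s_z, s_zz = sum(z), sum(zi * zi for zi in z)
    s_y = sum(y)
    s_zy = sum(zi * yi for zi, yi in zip(z, y))
    det = s_zz * m - s_z * s_z
    a = (s_z * s_y - m * s_zy) / det
    b = (s_zz * s_y - s_z * s_zy) / det
    # Simulate from the discrete model: x0(k) = (b - a*x1(k-1)) / (1 + a/2).
    sim, cum = [x0[0]], x0[0]
    for _ in range(n - 1):
        nxt = (b - a * cum) / (1.0 + a / 2.0)
        sim.append(nxt)
        cum += nxt
    return a, b, sim

a, b, sim = fit_and_simulate([1.0, 2.0, 4.0, 8.0, 16.0])
print(a, b, sim)  # a = -2/3, b = 2/3; sim reproduces the sequence exactly
```

Simulating instead from the whitening equation dx^(1)/dt + ax^(1) = b with the same (a, b) would introduce the bias the abstract describes, since here |a| = 2/3 is far from small.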
An experimental study is performed on probabilistic models for the long fatigue crack growth rates (da/dN) of LZ50 axle steel. An equation for the crack growth rate was derived to account for the trend of the stress intensity factor range approaching the threshold and for the mean stress effect. Probabilistic models were then built on this equation. They consist of the probabilistic da/dN-ΔK relations, the confidence-based da/dN-ΔK relations, and the combined probabilistic- and confidence-based da/dN-ΔK relations, which respectively characterize the effects on probabilistic assessment of the scatter of the test data, of the sample size, and of both. These relations provide a wide range of choices for practice. Analysis of the LZ50 steel test data indicates that the present models are applicable and feasible.
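In the mid-growth (Paris) regime, da/dN-ΔK relations of this kind reduce to da/dN = C(ΔK)^m, and C and m follow from a linear fit in log-log coordinates. The sketch below fits noise-free synthetic data generated with C = 1e-11 and m = 3 (values invented; the LZ50 parameters and the paper's threshold-corrected equation are not reproduced):

```python
import math

def fit_paris(delta_k, dadn):
    """Least-squares fit of log(da/dN) = log(C) + m*log(dK)."""
    lx = [math.log(k) for k in delta_k]
    ly = [math.log(r) for r in dadn]
    n = len(lx)
    mean_x = sum(lx) / n
    mean_y = sum(ly) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(lx, ly))
         / sum((x - mean_x) ** 2 for x in lx))
    c = math.exp(mean_y - m * mean_x)
    return c, m

# Synthetic data: C = 1e-11, m = 3.0, dK in MPa*sqrt(m) (values invented).
delta_k = [10.0, 15.0, 20.0, 30.0, 40.0]
dadn = [1e-11 * k ** 3.0 for k in delta_k]
c, m = fit_paris(delta_k, dadn)
print(c, m)  # recovers C ~ 1e-11 and m ~ 3.0
```

The probabilistic relations in the paper amount to putting distributions and confidence bounds on fitted parameters like these, reflecting the scatter of replicate specimens.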
The decentralized robust guaranteed cost control problem is studied for a class of interconnected singular large-scale systems with time delay and norm-bounded time-invariant parameter uncertainty under a given quadratic cost performance function. The problem addressed is to design a decentralized robust guaranteed cost state feedback controller such that the closed-loop system is not only regular, impulse-free, and stable, but also guarantees an adequate level of performance for all admissible uncertainties. A sufficient condition for the existence of such controllers is proposed in terms of a linear matrix inequality (LMI). When this condition is feasible, the desired decentralized state feedback controller gain matrices can be obtained. Finally, an illustrative example demonstrates the effectiveness of the proposed approach.
Based on the explicit finite element (FE) method and the ABAQUS platform, and considering both the inhomogeneity of soils and the concave-convex fluctuation of topography, a large-scale refined two-dimensional (2D) FE nonlinear analytical model of Fuzhou Basin was established, and the peak ground acceleration (PGA) and focusing effect with depth were analyzed. Results from a one-dimensional (1D) layered-medium equivalent linearization wave propagation method were added for contrast. The results show that: 1) PGA at different depths is obviously amplified compared to the input ground motion, and the amplification in both funnel-shaped depression and upheaval areas (classified by the shape of the bedrock surface) is especially remarkable. The 2D results indicate that PGA decreases non-monotonically with depth, with a greater focusing effect in some particular layers, while the 1D results show PGA decreasing with depth except for abrupt increases at a few particular depths. 2) In the funnel-shaped depression areas, the PGA amplification above 8 m depth is relatively large; in the upheaval areas, the amplification from 15 m to 25 m depth is more significant; little regularity could be found in the remaining areas. 3) The PGA amplification coefficient regresses with depth at a higher rate under smaller input motions. 4) The frequency spectrum of the input motion has a noticeable effect on the PGA amplification tendency.
A contained underground explosion (CUE) usually generates a huge number of aftershocks. The aftershocks induced by three CUEs were investigated in this paper. The conclusions show that the durations of the aftershock waveforms are rather short, with 70 percent ranging from 2 to 7; the occurrence of the aftershocks follows a negative power function with an exponent of -1.6, while within two weeks of the explosions the aftershock sequence attenuates a little faster, with an exponent of -1.0. During the early post-explosion stage the aftershocks appear in clusters, whereas during the late stage they usually appear individually. The number of aftershocks generated by comparable explosions differs by several times because of differences in medium and geological structure. Within one month after an explosion of Richter magnitude 5.5, the aftershock rate attenuates to the background level, after which only a tiny number of aftershocks occur.
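A power-law decay exponent like the -1.6 quoted above can be estimated from as few as two rate observations, since for rate = K·t^(-p) two points determine p = ln(r1/r2)/ln(t2/t1). The rates below are invented for illustration (chosen consistent with p = 1.6) and are not the paper's catalog data.

```python
import math

# Hypothetical aftershock rates (counts/day) at 1 day and 10 days after
# the explosion; the second value is constructed so that p = 1.6.
t1, r1 = 1.0, 100.0
t2, r2 = 10.0, 100.0 * 10.0 ** (-1.6)

# For rate = K * t**(-p), two observations determine the exponent:
p = math.log(r1 / r2) / math.log(t2 / t1)
K = r1 * t1 ** p
print(p, K)
```

With more than two observations one would fit the log-log regression over the whole sequence instead, which also exposes the change in exponent the abstract reports between the first two weeks and later times.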
The results of recent geothermobarometric and geochronological investigations of the scarce eclogites of the NW Himalaya (Tso Morari (Ladakh), India, and Kaghan Valley, Pakistan) have caused a major rethink of tectonometamorphic models for the India-Asia collision. Numerous petrologic studies have addressed the age and origin of metamorphism in the Higher Himalayan Crystallines (HHC) and Lesser Himalaya formations (LH) and their relationship to granite magmatism and movements along the Main Central Thrust (MCT) and South Tibetan Detachment Fault (STDF). However, all of these events are essentially Miocene (or younger) in age and can clearly be distinguished from the subduction and exhumation processes undergone by the eclogites, which are of Eocene age (Tonarini et al. 1993; Spencer & Gebauer 1996; de Sigoyer et al. 1999) and relate to the very early stages of the collision. The eclogites of eastern Ladakh are mafic lenses found in granitic gneisses (Ordovician intrusive age: Girard & Bussy 1999) and their surrounding late Precambrian to early Cambrian sedimentary units in the Tso Morari dome (see Steck et al. 1998). Detailed petrological and geochronological studies (Guillot et al. 1997; de Sigoyer et al. 1997, 1999) have identified an eclogite facies stage ((2000±300) MPa, (580±60)℃) followed by isothermal decompression associated with glaucophane growth at around (1100±200) MPa. Dating of different phases by different methods yielded ages around 55 Ma for this stage ((55±17) Ma, U-Pb, Aln; (55±12) Ma, Lu-Hf, Grt-Cpx-Rt; (55±7) Ma, Sm-Nd, Grt-Gln-Rt). A subsequent amphibolite facies overprint at slightly higher temperature ((610±70)℃) was dated at 45-48 Ma (metabasite: (47±11) Ma, Sm-Nd, Grt-Hbl; metapelite: (45±4) Ma, Rb-Sr, Mu-Ap-WR and (48±2) Ma, Ar-Ar, Phe). By (30±1) Ma (Ar-Ar, Bt-Mu), retrogression into the greenschist facies had occurred (de Sigoyer et al. 1999). These data indicate a two-stage history, with early exhumation being much faster (>4 mm/a) than the later evolution (1-2 mm/a).
The popularly used circulant matrix model of deconvolution is mostly heavily ill-posed or singular and is not suitable for many blind deconvolution problems. The aperiodic matrix model can improve the condition number of deconvolution problems, and its range of applicability is much wider than the circulant model's. This paper compares the two models, including their ill-posedness, the soundness of the approximations they entail, and their computational efficiency. The comparison shows that the aperiodic model is promising for the development of new restoration algorithms.
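The conditioning gap between the two models is easy to exhibit on a small case: for a two-point averaging kernel on an even-length grid, the circulant (periodic) deconvolution matrix is exactly singular, since its eigenvalues are the DFT of the zero-padded kernel and the Nyquist component vanishes, while the aperiodic full-convolution matrix keeps full column rank. A NumPy sketch with an invented kernel:

```python
import numpy as np

n = 8
kernel = np.array([0.5, 0.5])  # two-point moving average (invented example)

# Circulant model: eigenvalues are the DFT of the zero-padded first column.
first_col = np.zeros(n)
first_col[: kernel.size] = kernel
eigvals = np.fft.fft(first_col)
print(np.min(np.abs(eigvals)))  # ~0: the periodic model is singular

# Aperiodic model: full (linear) convolution matrix of shape (n + 1, n).
A = np.zeros((n + kernel.size - 1, n))
for j in range(n):
    A[j : j + kernel.size, j] = kernel
sing = np.linalg.svd(A, compute_uv=False)
print(np.min(sing))  # > 0: the aperiodic model has full column rank
```

The aperiodic model is rectangular rather than square, so restoration becomes a least-squares problem, but one whose conditioning no longer collapses at periodic-boundary artifacts.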
Funding: National Natural Science Foundation of China Key Project (No. 42050103); Higher Education Disciplinary Innovation Program (No. B25052); the Guangdong Pearl River Talent Program Innovative and Entrepreneurial Team Project (No. 2021ZT09H399); the Ministry of Education's Frontiers Science Center for Deep-Time Digital Earth (DDE) (No. 2652023001); Geological Survey Project of China Geological Survey (DD20240206201).
Funding: supported by the National Key Research and Development Program (2021YFC3002205) and the Postgraduate Research and Innovation Program of Tianjin Municipal Education Commission (2022BKY113), China.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 12302435 and 12221002).
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 52178515 and 52078133).
Funding: National Natural Science Foundation of China (71690233, 71971213, 71901214).
Abstract: Architecture frameworks have recently become an effective method to describe system of systems (SoS) architectures, such as the United States (US) Department of Defense Architecture Framework Version 2.0 (DoDAF2.0). As a viewpoint in DoDAF2.0, the operational viewpoint (OV) describes operational activities, nodes, and resource flows, and the OV models are important for SoS architecture development. However, as SoS complexity increases, constructing OV models with traditional methods exposes shortcomings such as inefficient data collection and low modeling standards. Therefore, we propose an intelligent modeling method for five OV models: operational resource flow (OV-2), organizational relationships (OV-4), operational activity hierarchy (OV-5a), operational activities model (OV-5b), and operational activity sequences (OV-6c). The main idea of the method is to extract OV architecture data from text and generate interoperable OV models. First, we construct the OV meta-model based on the DoDAF2.0 meta-model (DM2). Second, OV architecture named entities are recognized from text with the bidirectional long short-term memory and conditional random field (BiLSTM-CRF) model, and OV architecture relationships are collected with relationship extraction rules. Finally, we define the generation rules for OV models and develop an OV modeling tool. We use an unmanned surface vehicle (USV) swarm target defense SoS architecture as a case study to verify the feasibility and effectiveness of the intelligent modeling method.
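The flavor of rule-based relationship extraction over already-recognized entities can be sketched as follows. This is a hypothetical illustration: the paper's actual rules, entity labels, and sentences are not reproduced, and the patterns below are assumptions.

```python
import re

# Hypothetical relationship-extraction rules: each maps a sentence pattern
# to an OV relation. Entity spans here are matched lexically for brevity;
# in the paper they would come from the BiLSTM-CRF recognizer.
RULES = [
    (re.compile(r"(?P<node>[A-Z]\w+(?: \w+)*) performs (?P<activity>\w+(?: \w+)*)"),
     "performs"),                                   # OV-5b style relation
    (re.compile(r"(?P<src>[A-Z]\w+(?: \w+)*) sends (?P<flow>\w+(?: \w+)*)"
                r" to (?P<dst>[A-Z]\w+(?: \w+)*)"),
     "resource_flow"),                              # OV-2 style relation
]

def extract_relations(sentence):
    """Return (relation, slots) triples found in one sentence."""
    triples = []
    for pattern, relation in RULES:
        m = pattern.search(sentence)
        if m:
            triples.append((relation, m.groupdict()))
    return triples

print(extract_relations("Radar USV sends track data to Command USV"))
print(extract_relations("Command USV performs target assignment"))
```

Each extracted triple would then feed the model-generation rules to populate the corresponding OV diagram elements.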
Funding: funded through the India Meteorological Department, New Delhi, India, under the Forecasting Agricultural output using Space, Agrometeorology and Land based observations (FASAL) project, fund number No. ASC/FASAL/KT-11/01/HQ-2010.
Abstract: Background Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its use in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is strongly influenced by prevailing weather dynamics, so as the climate changes, assessing how weather changes affect crop performance is essential. Among the various techniques available, crop models are the most effective and widely used tools for predicting yields. Results This study compares statistical and machine learning models to assess their ability to predict cotton yield across the major producing districts of Karnataka, India, using a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. The artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and the mid stage (F2). The model evaluation metrics, including root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF), were also within acceptance limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts with predicted yields. These differences highlight the differential interactions of weather factors in each district for cotton yield formation, reflecting the individual response of each weather factor under different soil and management conditions across the major cotton growing districts of Karnataka. Conclusions Compared with statistical models, machine learning models such as ANNs showed higher efficiency in forecasting cotton yield because of their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting under rainfed conditions and for studying the relative impacts of weather factors on yield. The study thus provides valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
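The three evaluation metrics named above have standard definitions, sketched here in their common forms (an assumption, since the abstract does not spell out its formulas; the yield figures are hypothetical):

```python
import math

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def nrmse(obs, pred):
    """RMSE as a percentage of the observed mean."""
    return 100.0 * rmse(obs, pred) / (sum(obs) / len(obs))

def ef(obs, pred):
    """Nash-Sutcliffe modelling efficiency: 1 means a perfect model."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs  = [520.0, 480.0, 610.0, 550.0]   # hypothetical district yields (kg/ha)
pred = [505.0, 470.0, 630.0, 560.0]
print(rmse(obs, pred), nrmse(obs, pred), ef(obs, pred))
```

An nRMSE under roughly 10% and an EF near 1 are the usual acceptance bands such studies use.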
Funding: Project (61174132) supported by the National Natural Science Foundation of China; Project (2015zzts047) supported by the Fundamental Research Funds for the Central Universities, China; Project (20130162110067) supported by the Research Fund for the Doctoral Program of Higher Education of China.
Abstract: The temperature control of a large-scale vertical quench furnace is very difficult because of its huge volume and complex thermal exchanges. To meet the technical requirements of the quenching process, a temperature control system integrating temperature calibration and temperature uniformity control is developed for the thermal treatment of aluminum alloy workpieces in the large-scale vertical quench furnace. To obtain the aluminum alloy workpiece temperature, an air heat transfer model is established to describe the temperature gradient distribution, so that the immeasurable workpiece temperature can be calibrated from the available thermocouple temperature. To achieve uniformity control of the furnace temperature, a second-order partial differential equation (PDE) is derived to describe the thermal dynamics inside the vertical quench furnace. Based on the PDE, a decoupling matrix is constructed to resolve the coupling and decompose the heating process into multiple independent heating subsystems. An expert control rule is then used to find a compromise between temperature rise time and overshoot during the quenching process. The developed temperature control system has been successfully applied to a 31 m large-scale vertical quench furnace, and industrial running results show significantly improved temperature uniformity, lower overshoot, and shortened processing time.
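The decoupling idea can be illustrated with a minimal static sketch (an assumption: the paper derives its decoupling matrix from the PDE model, which is not reproduced here, and the gain values below are made up). If the steady-state gain matrix G maps heater inputs u to zone temperatures y, i.e. y = G u, then commanding u = D v with D = G⁻¹ lets each virtual input drive only its own zone.

```python
def inverse_2x2(g):
    """Invert a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = g
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

G = [[2.0, 0.5],   # strong self-heating, weaker cross-coupling (assumed gains)
     [0.4, 1.8]]
D = inverse_2x2(G)   # decoupling matrix: G @ D = I

# Requesting a rise only in zone 1 now leaves zone 2 essentially untouched:
y = matvec(G, matvec(D, [10.0, 0.0]))
print(y)  # approximately [10.0, 0.0]
```

The PDE-based matrix in the paper plays the same role, but for the spatially distributed dynamics rather than static gains.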
Abstract: Computer simulation models may be used to gain further information about missile performance variability. Model validation is an important aspect of the test program for a missile system: validation provides a basis for confidence in the model's results and is a necessary step if the model is to be used to draw inferences about the behavior of the real missile. This paper reviews methods useful for validating computer simulation models of missile systems and provides a new method, with a high degree of confidence, for such validation. Some examples of the use of the new method in validating computer simulation models are given.
Funding: supported by the National Natural Science Foundation of China (11471059, 51375517, 71271226); the China Postdoctoral Science Foundation Funded Project (2014M560712); the Chongqing Frontier and Applied Basic Research Project (cstc2014jcyjA00024); the Ministry of Education of Humanities and Social Sciences Youth Foundation (14YJAZH033); the Chongqing Municipal Education Scientific Planning Project (2012-GX-142); and the Higher School Teaching Reform Research Project in Chongqing (1202010).
Abstract: In order to investigate the structural discrepancies and modeling mechanisms among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all grey prediction models strictly derived from x^(0)(k) + a z^(1)(k) = b have identical model structure and simulation precision; moreover, they accomplish unbiased simulation of a homogeneous exponential sequence. However, the models derived from dx^(1)/dt + a x^(1) = b are only close to those derived from x^(0)(k) + a z^(1)(k) = b provided that |a| is small (|a| ≤ 0.1), and they cannot achieve unbiased simulation of a homogeneous exponential sequence. The above conclusions are proved and verified through theorems and examples.
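The unbiasedness claim for the discrete form can be checked directly: fitting x^(0)(k) + a z^(1)(k) = b by least squares and simulating with its exact recursive solution reproduces a homogeneous exponential sequence without error (a minimal pure-Python sketch; the sequence below is assumed for demonstration).

```python
def fit_grey(x0):
    """Fit x0(k) + a*z1(k) = b by least squares; return (a, b)."""
    x1, s = [], 0.0
    for v in x0:                 # 1-AGO: cumulative sum
        s += v
        x1.append(s)
    z = [(x1[k] + x1[k - 1]) / 2 for k in range(1, len(x0))]  # background values
    y = x0[1:]
    n = len(z)
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(u * v for u, v in zip(z, y))
    slope = (n * szy - sz * sy) / (n * szz - sz * sz)  # regress y on z: slope = -a
    a = -slope
    b = (sy - slope * sz) / n
    return a, b

def simulate(x0, a, b):
    """Exact recursion of x0(k) + a*z1(k) = b, anchored at x1(1) = x0(1):
    x1(k) = ((1 - a/2)/(1 + a/2)) * x1(k-1) + b/(1 + a/2)."""
    r = (1 - a / 2) / (1 + a / 2)
    s = b / (1 + a / 2)
    x1 = [x0[0]]
    for _ in range(len(x0) - 1):
        x1.append(r * x1[-1] + s)
    return [x1[0]] + [x1[k] - x1[k - 1] for k in range(1, len(x1))]

data = [2.0 * 1.1 ** k for k in range(1, 9)]   # homogeneous exponential sequence
a, b = fit_grey(data)
fitted = simulate(data, a, b)
```

For the same data, solving the whitening equation dx^(1)/dt + a x^(1) = b instead introduces a small systematic bias, which is the discrepancy the abstract describes.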
Funding: Project supported by the National Natural Science Foundation of China (Nos. 50375130 and 50323003), the Special Foundation of National Excellent Ph.D. Thesis (No. 200234), and the Planned Item for the Outstanding Young Teachers of the Ministry of Education of China (No. 2101).
Abstract: An experimental study is performed on probabilistic models for the long fatigue crack growth rates (da/dN) of LZ50 axle steel. An equation for the crack growth rate was derived to account for the trend of the stress intensity factor range approaching the threshold and for the average stress effect. Probabilistic models were then built on this equation. They comprise the probabilistic da/dN-ΔK relations, the confidence-based da/dN-ΔK relations, and the combined probabilistic- and confidence-based da/dN-ΔK relations, characterizing, respectively, the effects of the scatter of the test data, the number of samples, and both together. These relations provide a wide range of options for practice. Analysis of the test data of LZ50 steel indicates that the present models are applicable and feasible.
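The shape of such probabilistic da/dN-ΔK relations can be sketched with an assumed threshold-modified Paris-type law and lognormal scatter in the coefficient. This is illustrative only: the paper's actual rate equation (including its average-stress term) and fitted parameters are not reproduced, and C, m, ΔK_th, the scatter, and the quantiles below are assumptions.

```python
import math

C_median, m, dK_th = 1.0e-9, 3.0, 6.0  # assumed: mm/cycle, exponent, MPa*sqrt(m)
sigma_logC = 0.15                      # assumed std dev of log10(C)

def dadN(dK, z=0.0):
    """Growth rate at scatter quantile z (z = 0 gives the median curve);
    the rate vanishes as the range drops to the threshold dK_th."""
    if dK <= dK_th:
        return 0.0
    C = 10 ** (math.log10(C_median) + z * sigma_logC)
    return C * (dK - dK_th) ** m

dK = 15.0
lower, median, upper = dadN(dK, -1.645), dadN(dK), dadN(dK, 1.645)  # ~90% band
print(lower, median, upper)
```

The confidence-based relations in the paper widen this band further to reflect the finite number of specimens, which the sketch does not model.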
Funding: This project was supported by the National Natural Science Foundation of China (60474078) and the Science Foundation of Higher Education of Jiangsu, China (04KJD120016).
Abstract: The decentralized robust guaranteed cost control problem is studied for a class of interconnected singular large-scale systems with time delay and norm-bounded time-invariant parameter uncertainty, under a given quadratic cost performance function. The problem addressed is to design a decentralized robust guaranteed cost state feedback controller such that the closed-loop system is not only regular, impulse-free, and stable, but also guarantees an adequate level of performance for all admissible uncertainties. A sufficient condition for the existence of such controllers is proposed in terms of a linear matrix inequality (LMI). When this condition is feasible, the desired decentralized robust guaranteed cost state feedback controller gain matrices can be obtained. Finally, an illustrative example demonstrates the effectiveness of the proposed approach.
Funding: Project (2011CB013601) supported by the National Basic Research Program of China; Project (51378258) supported by the National Natural Science Foundation of China.
Abstract: Based on the explicit finite element (FE) method and the ABAQUS platform, and considering both the inhomogeneity of soils and the concave-convex fluctuation of topography, a large-scale refined two-dimensional (2D) nonlinear FE analytical model of Fuzhou Basin was established. The peak ground motion acceleration (PGA) and the focusing effect with depth were analyzed, and results from the one-dimensional (1D) layered-medium equivalent linearization method were added for contrast. The results show that: 1) PGA at different depths is clearly amplified relative to the input ground motion, and the amplification in both funnel-shaped depression and upheaval areas (defined by the shape of the bedrock surface) is especially remarkable; the 2D results indicate that PGA decreases non-monotonically with depth, with a stronger focusing effect in some particular layers, whereas the 1D results show PGA decreasing with depth except for abrupt increases at a few particular depths; 2) in the funnel-shaped depression areas, the PGA amplification above 8 m depth is relatively large, while in the upheaval areas the amplification from 15 m to 25 m depth is more significant; in the remaining areas, no clear regularity of the PGA amplification could be found; 3) the PGA amplification coefficient regresses more quickly with depth under smaller input motions; 4) the frequency spectral characteristics of the input motion have a noticeable effect on the PGA amplification tendency.
Abstract: A contained underground explosion (CUE) usually generates a huge number of aftershocks. Aftershocks induced by three CUEs were investigated in this paper. The results show that the durations of the aftershock waveforms are rather short, with 70 percent ranging from 2 to 7 s; the occurrence of the aftershocks conforms to a negative power function with an exponent of -1.6. The aftershock sequence attenuates somewhat faster, with an exponent of -1.0, within two weeks of the explosions. During the early post-explosion stage the aftershocks appear in clusters, whereas during the late stage they usually appear individually. The number of aftershocks generated by comparable explosions differs by several times because of differences in medium and geological structure; within one month after an explosion of Richter magnitude 5.5, the number of aftershocks attenuates to the background level, after which only a small number of aftershocks still occur.
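Exponents like the -1.6 and -1.0 above are typically recovered by log-log linear regression on the binned rate, which can be sketched as follows (illustrative synthetic data; the paper's catalogue is not reproduced):

```python
import math

def fit_power_law(ts, ns):
    """Fit n(t) = K * t**p by linear regression of log(n) on log(t);
    the returned slope is the decay exponent p."""
    xs = [math.log(t) for t in ts]
    ys = [math.log(n) for n in ns]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

days = [1, 2, 4, 8, 16, 32]
rate = [100.0 * t ** -1.6 for t in days]   # synthetic exact -1.6 decay
print(round(fit_power_law(days, rate), 3))  # -> -1.6
```

On real catalogues the fit would be done with maximum-likelihood or binned counts, but the recovered exponent plays the same role.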
Abstract: The results of recent geothermobarometric and geochronological investigations of scarce eclogites of the NW Himalaya (Tso Morari (Ladakh), India, and Kaghan Valley, Pakistan) have caused a major rethink of tectonometamorphic models for the India-Asia collision. Numerous petrologic studies have been undertaken on the age and origin of metamorphism in the Higher Himalayan Crystallines (HHC) and Lesser Himalaya formations (LH) and their relationship to granite magmatism and movements along the Main Central Thrust (MCT) and South Tibetan Detachment Fault (STDF). However, all of these events are essentially Miocene (or younger) in age and can clearly be distinguished from the subduction and exhumation processes undergone by the eclogites, which are of Eocene age (Tonarini et al. 1993; Spencer & Gebauer 1996; de Sigoyer et al. 1999) and relate to the very early stages of the collision. Eclogites of eastern Ladakh are mafic lenses found in granitic gneisses (Ordovician intrusive age: Girard & Bussy 1999) and their surrounding late Precambrian to early Cambrian sedimentary units in the Tso Morari dome (see Steck et al. 1998). Detailed petrological and geochronological studies (Guillot et al. 1997; de Sigoyer et al. 1997, 1999) have identified an eclogite facies stage ((2000±300) MPa, (580±60)°C) followed by isothermal decompression associated with glaucophane growth at around (1100±200) MPa. Dating of different phases by different methods yielded ages around 55 Ma for this stage ((55±17) Ma, U-Pb, Aln; (55±12) Ma, Lu-Hf, Grt-Cpx-Rt; (55±7) Ma, Sm-Nd, Grt-Gln-Rt). A subsequent amphibolite facies overprint at slightly higher temperature ((610±70)°C) was dated at 45-48 Ma (metabasite: (47±11) Ma, Sm-Nd, Grt-Hbl; metapelite: (45±4) Ma, Rb-Sr, Mu-Ap-WR and (48±2) Ma, Ar-Ar, Phe). By (30±1) Ma (Ar-Ar, Bt-Mu), retrogression into the greenschist facies had occurred (de Sigoyer et al. 1999). These data indicate a two-stage history, with early exhumation being much faster (>4 mm/a) than the later evolution (1-2 mm/a).
Abstract: The widely used circulant matrix model of deconvolution is usually severely ill-posed or singular, and it is not suitable for many blind deconvolution problems. The aperiodic matrix model can improve the condition number of deconvolution problems, and its range of applicability is much wider than the circulant model's. This paper compares the two models in terms of their ill-posedness, the soundness of the approximations they entail, and their computational efficiency. The comparison shows that the aperiodic model is promising for the development of new restoration algorithms.
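The structural difference between the two models can be made concrete for a 1-D kernel (a pure-Python construction for illustration; the kernel and signal length are assumed): the circulant model wraps the convolution around, while the aperiodic model keeps the full linear convolution as a taller Toeplitz matrix.

```python
def circulant_matrix(h, n):
    """n x n circulant model: convolution treated as periodic, so the
    kernel wraps around the signal boundary."""
    col = list(h) + [0.0] * (n - len(h))
    return [[col[(i - j) % n] for j in range(n)] for i in range(n)]

def aperiodic_matrix(h, n):
    """(n + len(h) - 1) x n aperiodic model: the Toeplitz matrix of the
    full linear convolution, with no wrap-around."""
    m = len(h)
    return [[h[i - j] if 0 <= i - j < m else 0.0 for j in range(n)]
            for i in range(n + m - 1)]

h = [1.0, 2.0, 1.0]        # assumed blur kernel
Cm = circulant_matrix(h, 4)
Am = aperiodic_matrix(h, 4)
```

Because the aperiodic matrix is tall (overdetermined) rather than square and wrap-coupled, its least-squares deconvolution problem is typically better conditioned, which is the point the comparison above develops.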