The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising platform for catalyst design. High-throughput screening of catalytic performance is feasible because large MOF structure databases are available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO₂ cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to an accuracy of up to 97% when the 75% quantile of the training set is used as the classification criterion. The feature contributions were further evaluated with SHAP and PDP analyses to provide physical insight. Using the model, 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 °C and 1 bar within one day, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top experimental performance among the reported MOFs, in good agreement with the prediction.
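The SHAP and PDP feature-contribution analysis this abstract mentions can be illustrated with scikit-learn's built-in partial dependence tools (SHAP itself needs the third-party `shap` package). The sketch below is a minimal stand-in using synthetic data and hypothetical descriptors, not the study's actual mechanism-derived ones.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
# Hypothetical mechanism-informed descriptors (names illustrative only),
# e.g. metal charge, pore size, linker acidity.
X = rng.normal(size=(500, 3))
# Synthetic label: "efficient" if a weighted score exceeds its 75% quantile,
# mimicking the quantile-based classification criterion.
score = 1.5 * X[:, 0] + 0.5 * X[:, 1]
y = (score > np.quantile(score, 0.75)).astype(int)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# Partial dependence (the PDP part of the analysis) on descriptor 0.
pd_result = partial_dependence(clf, X, features=[0], kind="average")
avg = pd_result["average"]  # shape (1, n_grid_points)
print(avg.shape)
```

Because the synthetic label grows with descriptor 0, the PDP curve rises from its left end to its right end, which is the kind of physical trend such plots are read for.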
Forest habitats are critical for biodiversity, ecosystem services, human livelihoods, and well-being. Capacity to conduct theoretical and applied forest ecology research addressing direct (e.g., deforestation) and indirect (e.g., climate change) anthropogenic pressures has benefited considerably from new field and statistical techniques. We used machine learning and bibliometric structural topic modelling to identify 20 latent topics comprising four principal fields from a corpus of 16,952 forest ecology/forestry articles published in eight ecology and five forestry journals between 2010 and 2022. Articles published per year increased from 820 in 2010 to 2,354 in 2021, shifting toward more applied topics. Publications from China and some countries in North America and Europe dominated, with relatively fewer articles from some countries in West and Central Africa and West Asia, despite globally important forest resources. Most study sites were in some countries in North America, Central Asia, South America, and Australia. Articles utilizing R statistical software predominated, increasing from 29.5% in 2010 to 71.4% in 2022. The most frequently used packages included lme4, vegan, nlme, MuMIn, ggplot2, car, MASS, mgcv, multcomp and raster. R was used more often in forest ecology than in applied forestry articles. R offers advantages in script and workflow sharing compared with other statistical packages. Our findings demonstrate that the disciplines of forest ecology/forestry are expanding in both number and scope, aided by more sophisticated statistical tools, to tackle the challenges of redressing forest habitat loss and the socio-economic impacts of deforestation.
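The topic-modelling step can be illustrated in miniature. The study used structural topic modelling (typically the R `stm` package), but plain latent Dirichlet allocation in scikit-learn shows the same idea of recovering latent topics from a document-term matrix; the toy corpus below is invented for illustration.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus standing in for forest ecology/forestry article titles.
docs = [
    "forest biodiversity conservation habitat",
    "timber harvest yield management plantation",
    "forest habitat biodiversity species richness",
    "plantation timber growth yield model",
]

X = CountVectorizer().fit_transform(docs)  # document-term count matrix
# Two latent topics here; the cited study fit 20 with structural topic modelling.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)  # per-document topic proportions, rows sum to 1
print(doc_topics.shape)
```

Each row of `doc_topics` is a document's mixture over the latent topics, which is the quantity a bibliometric analysis aggregates by year or journal.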
The safety assessment of high-level radioactive waste repositories requires high predictive accuracy for radionuclide diffusion and a comprehensive understanding of the diffusion mechanism. In this study, a through-diffusion method and six machine-learning methods were employed to investigate the diffusion of ReO₄⁻, HCrO₄⁻, and I⁻ in saturated compacted bentonite under different salinities and compacted dry densities. The machine-learning models were trained using two datasets. One dataset contained six input features and 293 instances obtained from the diffusion database system of the Japan Atomic Energy Agency (JAEA-DDB) and 15 publications. The other dataset, comprising 15,000 pseudo-instances, was produced using a multi-porosity model and contained eight input features. The results indicate that the former dataset yielded higher predictive accuracy than the latter. Light gradient-boosting exhibited higher prediction accuracy (R² = 0.92) and lower error (MSE = 0.01) than the other machine-learning algorithms. In addition, Shapley Additive Explanations, Feature Importance, and Partial Dependence Plot analyses indicate that the rock capacity factor and compacted dry density had the two most significant effects on predicting the effective diffusion coefficient, thereby offering valuable insights.
With the projected global surge in hydrogen demand, driven by increasing applications and the imperative for low-emission hydrogen, the integration of machine learning (ML) across the hydrogen energy value chain is a compelling avenue. This review uniquely focuses on harnessing the synergy between ML and computational modeling (CM) or optimization tools, as well as integrating multiple ML techniques with CM, for the synthesis of diverse hydrogen evolution reaction (HER) catalysts and various hydrogen production processes (HPPs). Furthermore, this review addresses a notable gap in the literature by offering insights, analyzing challenges, and identifying research prospects and opportunities for sustainable hydrogen production. While the literature reflects a promising landscape for ML applications in hydrogen energy domains, transitioning AI-based algorithms from controlled environments to real-world applications poses significant challenges. Hence, this comprehensive review delves into the technical, practical, and ethical considerations associated with the application of ML in HER catalyst development and HPP optimization. Overall, this review provides guidance for unlocking the transformative potential of ML in enhancing prediction efficiency and sustainability in the hydrogen production sector.
Machine learning-based surrogate models have significant advantages in terms of computing efficiency. In this paper, we present a pilot study on fast calibration using machine learning techniques. Technology computer-aided design (TCAD) is a powerful simulation tool for electronic devices that has been widely used in research on radiation effects. However, calibration of TCAD models is time-consuming. In this study, we introduce a fast calibration approach for TCAD models of metal–oxide–semiconductor field-effect transistors (MOSFETs). The approach uses a machine learning-based surrogate model that is several orders of magnitude faster than the original TCAD simulation; the desired calibration results were obtained within seconds. A fundamental model containing 26 parameters is introduced to represent the typical structure of a MOSFET. Classification was used to improve the efficiency of training-sample generation, and feature selection techniques were employed to identify important parameters. A surrogate model consisting of a classifier and a regressor was built, and a calibration procedure based on the surrogate model was proposed and tested with three calibration goals. Our work demonstrates the feasibility of machine learning-based fast model calibration for MOSFETs. In addition, this study shows that these machine learning techniques learn patterns and correlations from data instead of relying on domain expertise, indicating that machine learning could be an alternative research approach that complements classical physics-based research.
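The classifier-plus-regressor surrogate structure described here can be sketched generically: a classifier first screens out parameter sets for which the simulation would be invalid, and a regressor predicts the response for the rest. Everything below (the validity rule, the response surface, the four parameters) is an invented stand-in for the 26-parameter TCAD model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 4))        # stand-ins for device parameters
valid = X[:, 0] + X[:, 1] > -0.5              # toy rule: simulation "converges"
y = np.where(valid, X[:, 2] ** 2 + X[:, 3], np.nan)  # response where valid

clf = RandomForestClassifier(random_state=0).fit(X, valid)
reg = RandomForestRegressor(random_state=0).fit(X[valid], y[valid])

def surrogate(x):
    """Return the predicted response, or None where the classifier flags invalidity."""
    x = np.atleast_2d(x)
    return reg.predict(x)[0] if clf.predict(x)[0] else None

print(surrogate([0.5, 0.5, 0.5, 0.0]))  # valid region -> numeric prediction
print(surrogate([-1.0, -1.0, 0.0, 0.0]))  # invalid region -> None
```

Gating the regressor this way keeps it from extrapolating into regions the simulator never produced data for, which is the efficiency argument made in the abstract.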
Traditional numerical reservoir simulation has been contributing to the oil and gas industry for decades. The current state of this technology is the result of decades of research and development by a large number of engineers and scientists. Starting in the late 1960s and early 1970s, advances in computer hardware, along with the development and adaptation of clever algorithms, resulted in a paradigm shift in reservoir studies, moving them from simplified analogs and analytical solution methods to more mathematically robust computational and numerical solution models.
With the rapid development of modern wireless communications and radar, antennas and arrays are becoming more complex, with more degrees of design freedom, more integration and fabrication constraints, and more design objectives. While full-wave electromagnetic simulation can be very accurate and is therefore essential to the design process, it is also very time-consuming, which creates many challenges for antenna design, optimization and sensitivity analysis (SA). Recently, machine-learning-assisted optimization (MLAO) has been widely introduced to accelerate the design process of antennas and arrays. Machine learning (ML) methods, including Gaussian process regression, support vector machines (SVMs) and artificial neural networks (ANNs), have been applied to build surrogate models of antennas to achieve fast response prediction. With the help of these ML methods, various MLAO algorithms have been proposed for different applications. A comprehensive survey of recent advances in ML methods for antenna modeling is first presented. Then, algorithms for ML-assisted antenna design, including optimization and SA, are reviewed. Finally, some challenges facing future MLAO for antenna design are discussed.
The hydro-turbine governing system is a time-varying complex system with strong nonlinearity, and its dynamic characteristics are jointly affected by hydraulic, mechanical, electrical, and other factors. Addressing the stability of the hydro-turbine governing system (HTGS), this paper first builds a dynamic model of the system through mechanism modeling and introduces the transfer coefficient characteristics under different load conditions to obtain the stability category of the system. A BP neural network is used for the machine learning, and predictive analysis of the stability of the system under different working conditions is carried out, with the additional-momentum method used to optimize the algorithm. The test-set results show that the method can accurately distinguish the stability category of the HTGS, and the research results can provide a theoretical reference for the operation and management of smart hydropower stations in the future.
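A hedged sketch of the BP-network stability classifier: scikit-learn's `MLPClassifier` with `solver="sgd"` and a momentum term plays the role of back-propagation with the additional-momentum refinement. The operating-condition features and the stable/unstable rule below are synthetic placeholders, not the mechanism model's transfer coefficients.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Synthetic stand-ins for operating-condition features (e.g. head, load, gate opening).
X = rng.uniform(size=(600, 3))
y = (X[:, 0] - 0.8 * X[:, 1] > 0.1).astype(int)  # toy stable/unstable label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# SGD with a momentum term stands in for BP training with additional momentum.
net = MLPClassifier(hidden_layer_sizes=(16,), solver="sgd", momentum=0.9,
                    learning_rate_init=0.05, max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print(round(net.score(X_te, y_te), 3))  # held-out classification accuracy
```

The held-out score is the analogue of the paper's test-set check that stability categories are distinguished correctly.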
We present an efficient and risk-informed closed-loop field development (CLFD) workflow for recurrently revising the field development plan (FDP) using the accrued information. To make the process practical, we integrated multiple machine learning concepts, an intelligent selection process to discard the worst FDP options, and a growing set of representative reservoir models. These concepts were combined with a cluster-based learning-and-evolution optimizer to efficiently explore the search space of decision variables. Unlike previous studies, we also accounted for the execution time of the CLFD workflow and worked with more realistic timelines to confirm its utility. To appreciate the importance of data assimilation and new well logs in a CLFD workflow, we carried out the study under rigorous conditions without a reduction in uncertainty attributes. The proposed CLFD workflow was implemented on a benchmark analogous to a giant field with extensively time-consuming simulation models. The results underscore that an ensemble with as few as 100 scenarios was sufficient to gauge the geological uncertainty, despite the field's highly heterogeneous characteristics. The CLFD workflow improved efficiency by over 85% compared with the previously validated workflow. Finally, we present some acute insights and problems related to data assimilation for the practical application of a CLFD workflow.
The variable air volume (VAV) air conditioning system exhibits strong coupling and large time delays, so model predictive control (MPC) is normally used to pursue performance improvement. Because it is difficult to select MPC controller parameters that give the system a desired response, a novel tuning method based on machine learning and improved particle swarm optimization (PSO) is proposed. In this method, the relationship between MPC controller parameters and time-domain performance indices is established via machine learning. PSO is then used to optimize the MPC controller parameters for better performance in terms of the time-domain indices. In addition, the PSO algorithm is further modified under the principles of population attenuation and event triggering to reduce the computation time of the tuning method. Finally, the effectiveness of the proposed method is validated on a hardware-in-the-loop VAV system.
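The PSO stage of the tuning method can be sketched with a minimal global-best particle swarm optimizer. The cost function below is a stand-in (distance of two notional MPC parameters from a known optimum) rather than the paper's time-domain performance indices, and the population-attenuation and event-triggering modifications are omitted.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimizer with a global-best topology."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions (e.g. MPC weights)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(cost, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social terms
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(cost, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Stand-in cost: distance of two notional MPC parameters from optimum (1, 2).
best, best_f = pso(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2, dim=2)
print(best.round(3), round(best_f, 6))
```

In the actual method the learned machine-learning model replaces the toy cost, so each particle evaluation is cheap compared with a hardware or simulation run.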
Occupant behaviour has significant impacts on the performance of machine learning algorithms when predicting building energy consumption. Due to a variety of reasons (e.g., underperforming building energy management systems or restrictions due to privacy policies), the availability of occupational data has long been an obstacle that hinders the performance of machine learning algorithms in predicting building energy consumption. Therefore, this study proposed an agent-based machine learning model whereby agent-based modelling was employed to generate simulated occupational data as input features for machine learning algorithms for building energy consumption prediction. Boruta feature selection was also introduced in this study to select all relevant features. The results indicated that the performance of machine learning algorithms in predicting building energy consumption was significantly improved when using simulated occupational data, with even greater improvements after conducting Boruta feature selection.
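Boruta's core mechanism, comparing each real feature's importance against shuffled "shadow" copies, can be sketched without the dedicated `boruta` package. The sketch below runs a single shadow round on synthetic data (real Boruta iterates this with a statistical test over many rounds); the one relevant feature is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic stand-ins: feature 0 drives "energy consumption"; 1 and 2 are noise.
X = rng.normal(size=(400, 3))
y = 3 * X[:, 0] + 0.1 * rng.normal(size=400)

# Core Boruta idea: append column-shuffled "shadow" copies of every feature,
# then keep features whose importance beats the best shadow importance.
shadows = rng.permuted(X, axis=0)
X_aug = np.hstack([X, shadows])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_aug, y)
real_imp = rf.feature_importances_[:3]
shadow_max = rf.feature_importances_[3:].max()
selected = [i for i, imp in enumerate(real_imp) if imp > shadow_max]
print(selected)
```

Shuffling destroys any real association with the target, so a feature that cannot out-score its own shadow is treated as irrelevant.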
By using the numerical renormalization group (NRG) method, we construct a large dataset with about one million spectral functions of the Anderson quantum impurity model. The dataset contains the density of states (DOS) of the host material, the strength of the Coulomb interaction between on-site electrons (U), and the hybridization between the host material and the impurity site (Γ). The continuous DOS and the spectral functions are stored as Chebyshev coefficients and wavelet functions, respectively. From this dataset, we build seven different machine learning networks to predict the spectral function from the input data: DOS, U, and Γ. Three evaluation indices, mean absolute error (MAE), relative error (RE) and root mean square error (RMSE), are used to analyze the prediction abilities of the different network models. Detailed analysis shows that, of the two widely used recurrent neural networks (RNNs), the gated recurrent unit (GRU) performs better than the long short-term memory (LSTM) network. A combination of bidirectional GRU (BiGRU) and GRU has the best performance among GRU, BiGRU, LSTM, and BiLSTM; the MAE peak of BiGRU+GRU reaches 0.00037. We also tested a one-dimensional convolutional neural network (1DCNN) with 20 hidden layers and a residual neural network (ResNet). The 1DCNN has almost the same performance as the BiGRU+GRU network on the original dataset, but its robustness appears slightly weaker than that of BiGRU+GRU when all models are tested on two other independent datasets. The ResNet has the worst performance among the seven network models. The datasets presented in this paper, including the large dataset of spectral functions of the Anderson quantum impurity model, are openly available at https://doi.org/10.57760/sciencedb.j00113.00192.
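The Chebyshev-coefficient storage of continuous DOS curves mentioned above can be demonstrated with NumPy's `numpy.polynomial.chebyshev` module; the curve below is a smooth synthetic stand-in for a DOS, not data from the published dataset.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# A smooth stand-in for a DOS curve on the scaled interval [-1, 1].
x = np.linspace(-1, 1, 400)
dos = np.exp(-5 * x**2) * (1 + 0.3 * x)

# Fit a finite Chebyshev series: these coefficients are what gets stored.
coeffs = C.chebfit(x, dos, deg=30)      # 31 coefficients instead of 400 samples
dos_rec = C.chebval(x, coeffs)          # reconstruction from the coefficients

max_err = np.abs(dos - dos_rec).max()
print(len(coeffs), float(max_err))
```

For smooth curves the Chebyshev coefficients decay rapidly, so a short coefficient vector reproduces the curve to near machine precision, which is what makes this a compact storage format.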
Photonic inverse design concerns the problem of finding photonic structures with target optical properties. However, traditional methods based on optimization algorithms are time-consuming and computationally expensive. Recently, deep learning-based approaches have been developed to tackle the inverse design problem efficiently. Although most of these neural network models have demonstrated high accuracy on different inverse design problems, no previous study has examined the potential effects of the constraints imposed by nanomanufacturing. Additionally, the relative strengths of different deep learning-based inverse design approaches have not been fully investigated. Here, we benchmark three commonly used deep learning models for inverse design: tandem networks, variational autoencoders, and generative adversarial networks. We provide detailed comparisons of their accuracy, diversity, and robustness. We find that tandem networks and variational autoencoders give the best accuracy, while generative adversarial networks lead to the most diverse predictions. Our findings can serve as a guideline for researchers to select the model best suited to their design criteria and fabrication considerations. In addition, our code and data are publicly available and can be used for future inverse design model development and benchmarking.
With the rapid advancement of machine learning technology and its growing adoption in research and engineering applications, an increasing number of studies have embraced data-driven approaches for modeling wind turbine wakes. These models can capture complex, high-dimensional characteristics of wind turbine wakes while offering significantly greater prediction efficiency than physics-driven models. As a result, data-driven wind turbine wake models are regarded as powerful and effective tools for predicting wake behavior and turbine power output. This paper aims to provide a concise yet comprehensive review of existing studies on wind turbine wake modeling that employ data-driven approaches. It begins by defining and classifying machine learning methods to facilitate a clearer understanding of the reviewed literature. Subsequently, the related studies are categorized into four key areas: wind turbine power prediction, data-driven analytic wake models, wake field reconstruction, and the incorporation of explicit physical constraints. The accuracy of data-driven models is influenced by two primary factors: the quality of the training data and the performance of the model itself. Accordingly, both data accuracy and model structure are discussed in detail within the review.
It is of great significance to accurately and rapidly identify shale lithofacies for the evaluation and prediction of sweet spots in shale oil and gas reservoirs. To address the problem of the low resolution of logging curves, this study establishes a grayscale-phase model for shale lithofacies identification based on high-resolution grayscale curves using clustering analysis algorithms, working with the Shahejie Formation, Bohai Bay Basin, China. The grayscale phase is defined as the combination of absolute grayscale and relative amplitude together with their features. The absolute grayscale is the absolute magnitude of the gray values and is used to evaluate the material composition (mineral composition + total organic carbon) of the shale, while the relative amplitude is the difference between adjacent gray values and is used to identify the shale structure type. The results show that the grayscale-phase model identifies shale lithofacies well; the accuracy and applicability of the model were verified by the fitting relationship between absolute grayscale and shale mineral composition, as well as the corresponding relationships between relative amplitudes and laminae development in shales. Four lithofacies are identified in the target layer of the study area: massive mixed shale, laminated mixed shale, massive calcareous shale and laminated calcareous shale. This method not only effectively characterizes the material composition of shale but also numerically characterizes the development degree of shale laminae, solving the problem that millimeter-scale laminae are difficult to identify from logging curves. It can provide technical support for shale lithofacies identification and for sweet-spot evaluation and prediction in complex continental lacustrine basins.
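The clustering-analysis step of the grayscale-phase model can be sketched with k-means on two-dimensional (absolute grayscale, relative amplitude) features; the numbers below are invented placeholders for two notional facies, not core-image measurements from the Shahejie Formation.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic (absolute grayscale, relative amplitude) pairs for two notional facies:
# dark massive shale (low grayscale, low amplitude) vs. laminated shale (high, high).
massive = rng.normal([60, 2], [5, 0.5], size=(100, 2))
laminated = rng.normal([140, 12], [5, 0.5], size=(100, 2))
X = np.vstack([massive, laminated])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_
# Each synthetic facies should land almost entirely in its own cluster.
purity = max(labels[:100].mean(), 1 - labels[:100].mean())
print(round(purity, 2))
```

In practice four clusters would be requested to match the four lithofacies, and the cluster centers would then be interpreted against mineral composition and laminae development.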
A biased sampling algorithm for the restricted Boltzmann machine (RBM) is proposed, which allows generating configurations with a conserved quantity. To validate the method, a study of the short-range order in binary alloys with positive and negative exchange interactions is carried out. The network is trained on data collected by Monte Carlo simulations for a simple Ising-like binary alloy model and used to calculate the Warren–Cowley short-range order parameter and other thermodynamic properties. We demonstrate that the proposed method not only correctly reproduces the order parameters for the alloy concentration at which the network was trained, but can also predict them for any other concentration.
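The Warren–Cowley short-range order parameter used for validation is simple to compute directly from sampled configurations; a minimal first-shell version for a 1D periodic binary chain (a toy stand-in for the alloy lattice, not the RBM sampler itself) is:

```python
import numpy as np

def warren_cowley(config):
    """First-shell Warren-Cowley parameter for a 1D periodic binary chain.

    config: array of 0 (A) and 1 (B). alpha = 1 - p_AB / c_B, where p_AB is
    the probability that a nearest neighbor of an A site is B.
    """
    config = np.asarray(config)
    c_b = config.mean()
    a_sites = config == 0
    right = np.roll(config, -1)
    left = np.roll(config, 1)
    # Fraction of nearest neighbors of A sites that are B.
    p_ab = np.concatenate([right[a_sites], left[a_sites]]).mean()
    return 1 - p_ab / c_b

# Perfectly alternating chain (ordering): every neighbor of A is B.
print(warren_cowley([0, 1] * 10))   # 1 - 1/0.5 = -1.0
# Fully phase-separated chain (clustering): alpha is positive.
print(warren_cowley([0] * 10 + [1] * 10))
```

Negative alpha signals ordering (unlike neighbors preferred, negative exchange) and positive alpha signals clustering, which is exactly the contrast the positive- and negative-interaction alloys in the study exhibit.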
Interest has recently emerged in potential applications of (n,2n) reactions of unstable nuclei, but challenges arise from the scarcity of experimental cross-section data. This study aims to predict the (n,2n) reaction cross-sections of long-lived fission products using a tensor model. The tensor model extends the collaborative filtering algorithm to nuclear data: it is based on tensor decomposition and completion to predict (n,2n) reaction cross-sections, with the corresponding EXFOR data applied as training data. The reliability of the proposed tensor model was validated by comparing its calculations with data from EXFOR and other databases. Predictions were made for long-lived fission products such as ⁶⁰Co, ⁷⁹Se, ⁹³Zr, ¹⁰⁷Pd, ¹²⁶Sn, and ¹³⁷Cs, providing a predicted energy range for effectively transmuting long-lived fission products into shorter-lived or less radioactive isotopes. This method could be a powerful tool for completing (n,2n) reaction cross-section data and shows the possibility of selective transmutation of nuclear waste.
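The decomposition-and-completion idea behind the tensor model can be illustrated in its simplest (matrix, rank-2) form with alternating least squares: observed entries stand in for sparse EXFOR measurements, and the fitted low-rank factors predict the missing ones. All values below are synthetic; the actual model works on a higher-order tensor.

```python
import numpy as np

rng = np.random.default_rng(0)
# Ground-truth rank-2 "cross-section table" (nuclide x energy), values synthetic.
U_true = rng.normal(size=(20, 2))
V_true = rng.normal(size=(30, 2))
M = U_true @ V_true.T

# Observe only 60% of entries, mimicking sparse experimental coverage.
mask = rng.random(M.shape) < 0.6

# Alternating least squares on the observed entries (ridge-regularized).
k, lam = 2, 1e-3
U = rng.normal(size=(20, k))
V = rng.normal(size=(30, k))
for _ in range(50):
    for i in range(20):            # update each row factor
        idx = mask[i]
        A = V[idx].T @ V[idx] + lam * np.eye(k)
        U[i] = np.linalg.solve(A, V[idx].T @ M[i, idx])
    for j in range(30):            # update each column factor
        idx = mask[:, j]
        A = U[idx].T @ U[idx] + lam * np.eye(k)
        V[j] = np.linalg.solve(A, U[idx].T @ M[idx, j])

# Error on the entries that were never observed, i.e. the "predictions".
err = np.abs(M - U @ V.T)[~mask].max()
print(float(err))
```

The same alternating scheme generalizes to tensors by updating one factor matrix per mode, which is the completion machinery the abstract refers to.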
This paper proposes a robust control scheme based on sequential convex programming and a learning-based model for nonlinear systems subject to additive uncertainties. To handle system nonlinearity and unknown uncertainties, we study a tube-based model predictive control scheme that makes use of a feedforward neural network. Based on the boundedness of the average cost function as time approaches infinity, a min-max optimization problem (referred to as the min-max OP) is formulated to design the controller. The feasibility of this optimization problem and the practical stability of the controlled system are ensured. To demonstrate the efficacy of the proposed approach, a numerical simulation of a double-tank system is conducted, and its results verify the effectiveness of the proposed scheme.
Funding for the MOF catalyst screening study: the National Key Research and Development Program of China (2021YFB3501501), the National Natural Science Foundation of China (Nos. 22225803, 22038001, 22108007 and 22278011), the Beijing Natural Science Foundation (No. Z230023), and the Beijing Science and Technology Commission (No. Z211100004321001).
Funding for the forest ecology bibliometric study: the National Natural Science Foundation of China (31971541).
Funding for the radionuclide diffusion study: the Key Program of the National Natural Science Foundation of China (No. 12335008), the Postgraduate Research and Innovation Project of Huzhou University (No. 2023KYCX62), the Scientific Research Fund of the Zhejiang Provincial Education Department (No. Y202352712), and the Huzhou science and technology planning project (No. 2021GZ60).
Funding for the hydrogen production review: the Higher Institution Centre of Excellence (HICoE) fund (project code JPT.S(BPKI)2000/016/018/015JId.4(21)/2022002HICOE), Universiti Tenaga Nasional (UNITEN) funding (J510050002–IC–6 BOLDREFRESH2025), the Akaun Amanah Industri Bekalan Elektrik (AAIBE) Chair of Renewable Energy grant, and the NEC Energy Transition Grant (202203003ETG).
Funding: Supported by the National Natural Science Foundation of China (Nos. 11690040 and 11690043).
Abstract: Machine learning-based surrogate models have significant advantages in terms of computing efficiency. In this paper, we present a pilot study on fast calibration using machine learning techniques. Technology computer-aided design (TCAD) is a powerful simulation tool for electronic devices and has been widely used in research on radiation effects. However, calibration of TCAD models is time-consuming. In this study, we introduce a fast calibration approach for TCAD models of metal–oxide–semiconductor field-effect transistors (MOSFETs). This approach utilizes a machine learning-based surrogate model that is several orders of magnitude faster than the original TCAD simulation, obtaining the desired calibration results within several seconds. A fundamental model containing 26 parameters is introduced to represent the typical structure of a MOSFET. A classification scheme was developed to improve the efficiency of training-sample generation, and feature selection techniques were employed to identify important parameters. A surrogate model consisting of a classifier and a regressor was built, and a calibration procedure based on the surrogate model was proposed and tested with three calibration goals. Our work demonstrates the feasibility of machine learning-based fast model calibration for MOSFETs. In addition, this study shows that these machine learning techniques learn patterns and correlations from data instead of relying on domain expertise, indicating that machine learning could be an alternative research approach that complements classical physics-based research.
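The classifier-plus-regressor surrogate described above can be conveyed with a deliberately tiny sketch: a nearest-centroid classifier screens out non-convergent parameter sets, a least-squares regressor predicts the device response on valid ones, and calibration becomes a grid search against a target response. The one-parameter "simulator", the 0.8 validity threshold, and the target value are all invented for illustration; the paper's 26-parameter model is far richer.

```python
import numpy as np

# Synthetic stand-in for TCAD runs: one device parameter p in [0, 1];
# the "simulator" converges only for p < 0.8, and the response is linear there.
rng = np.random.default_rng(1)
p_train = rng.uniform(0.0, 1.0, 400)
valid = p_train < 0.8
resp = np.where(valid, 1.0 + 2.0 * p_train, np.nan)

# Classifier: nearest-centroid rule separating valid from invalid parameter sets.
c_valid, c_invalid = p_train[valid].mean(), p_train[~valid].mean()

# Regressor: least-squares fit on the valid samples only.
A = np.column_stack([np.ones(valid.sum()), p_train[valid]])
coef, *_ = np.linalg.lstsq(A, resp[valid], rcond=None)

def surrogate(p):
    """Fast stand-in for the simulator: classify first, then regress."""
    if abs(p - c_valid) > abs(p - c_invalid):
        return np.nan                      # classified as non-convergent
    return coef[0] + coef[1] * p

# Calibration: grid-search the parameter reproducing a "measured" response.
target = 1.0 + 2.0 * 0.4                   # pretend measurement at true p = 0.4
grid = np.linspace(0.0, 1.0, 1001)
errs = [abs(surrogate(p) - target) if not np.isnan(surrogate(p)) else np.inf
        for p in grid]
p_cal = grid[int(np.argmin(errs))]
```

Because the surrogate is cheap, the search over candidate parameters takes milliseconds rather than repeated full simulations, which is the point of the approach.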
Abstract: Traditional numerical reservoir simulation has been contributing to the oil and gas industry for decades. The current state of this technology is the result of decades of research and development by a large number of engineers and scientists. Starting in the late 1960s and early 1970s, advances in computer hardware, along with the development and adaptation of clever algorithms, resulted in a paradigm shift in reservoir studies, moving them from simplified analogs and analytical solution methods to more mathematically robust computational and numerical solution models.
Funding: Supported in part by the National Key R&D Program of China under grant 2018YFB1801101, by the National Natural Science Foundation of China under grants 61671145 and 61960206006, and by the Key R&D Program of Jiangsu Province of China under grant BE2018121.
Abstract: With the rapid development of modern wireless communications and radar, antennas and arrays are becoming more complex, with, e.g., more degrees of design freedom, more integration and fabrication constraints, and more design objectives. While full-wave electromagnetic simulation can be very accurate and is therefore essential to the design process, it is also very time-consuming, which creates many challenges for antenna design, optimization, and sensitivity analysis (SA). Recently, machine-learning-assisted optimization (MLAO) has been widely introduced to accelerate the design process of antennas and arrays. Machine learning (ML) methods, including Gaussian process regression, support vector machines (SVMs), and artificial neural networks (ANNs), have been applied to build surrogate models of antennas to achieve fast response prediction. With the help of these ML methods, various MLAO algorithms have been proposed for different applications. A comprehensive survey of recent advances in ML methods for antenna modeling is first presented. Then, algorithms for ML-assisted antenna design, including optimization and SA, are reviewed. Finally, some challenges facing future MLAO for antenna design are discussed.
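Gaussian process regression, one of the surrogate techniques named above, can be sketched in a few lines: fit the posterior mean of a GP with an RBF kernel to a handful of "simulated" responses, then predict at new design points cheaply. The sine-shaped toy response, the kernel length scale, and the jitter value are assumptions for illustration, not an antenna model.

```python
import numpy as np

def rbf_kernel(A, B, length=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    d2 = (np.asarray(A)[:, None] - np.asarray(B)[None, :]) ** 2
    return np.exp(-0.5 * d2 / length ** 2)

# Toy "antenna response": a smooth function of one geometry parameter,
# standing in for a handful of expensive full-wave simulations.
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train)

K = rbf_kernel(x_train, x_train) + 1e-8 * np.eye(8)   # jitter for stability
alpha = np.linalg.solve(K, y_train)

def gpr_predict(x_new):
    """Posterior mean of the GP surrogate at new design points."""
    return rbf_kernel(x_new, x_train) @ alpha

pred_mid = gpr_predict([0.3])[0]   # cheap prediction between simulated points
```

In an MLAO loop, an optimizer would query `gpr_predict` thousands of times and reserve the full-wave solver for occasionally verifying and refining the surrogate.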
Abstract: The hydro-turbine governing system is a time-varying complex system with strong nonlinearity, and its dynamic characteristics are jointly affected by hydraulic, mechanical, electrical, and other factors. Addressing the stability of the hydro-turbine governing system, this paper first builds a dynamic model of the system through mechanism modeling and introduces the transfer coefficient characteristics under different load conditions to obtain the stability category of the system. A BP neural network is used to perform the machine learning, and predictive analysis of the stability of the system under different working conditions is carried out using the additional momentum method to optimize the algorithm. The test set results show that the method can accurately distinguish the stability category of the hydro-turbine governing system (HTGS), and the research results can provide a theoretical reference for the operation and management of smart hydropower stations in the future.
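A minimal version of a BP network trained with an additional momentum term might look as follows. The two Gaussian "operating point" clusters standing in for stable/unstable conditions, the network size, and the learning rates are all invented for illustration; they are not the paper's HTGS model or training setup.

```python
import numpy as np

# Toy stand-in for HTGS operating points: two features (e.g., transfer
# coefficients under load), labeled unstable (0) or stable (1).
rng = np.random.default_rng(2)
X0 = rng.normal([-1.0, -1.0], 0.3, size=(100, 2))   # "unstable" cluster
X1 = rng.normal([1.0, 1.0], 0.3, size=(100, 2))     # "stable" cluster
X = np.vstack([X0, X1])
y = np.hstack([np.zeros(100), np.ones(100)])

# Two-layer BP network, gradient descent plus an additional momentum term.
W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
lr, mom = 0.1, 0.9
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    H = np.tanh(X @ W1 + b1)                    # forward pass
    out = sigmoid(H @ W2 + b2).ravel()
    d_out = (out - y)[:, None] / len(y)         # cross-entropy gradient
    d_H = (d_out @ W2.T) * (1 - H ** 2)         # backpropagate through tanh
    grads = [X.T @ d_H, d_H.sum(0), H.T @ d_out, d_out.sum(0)]
    for p, g, v in zip((W1, b1, W2, b2), grads, vel):
        v *= mom                                # momentum keeps past direction
        v -= lr * g
        p += v

H = np.tanh(X @ W1 + b1)
acc = ((sigmoid(H @ W2 + b2).ravel() > 0.5) == y.astype(bool)).mean()
```

The momentum term damps oscillations and speeds convergence relative to plain gradient descent, which is the usual motivation for the "additional momentum" optimization.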
Abstract: We present an efficient and risk-informed closed-loop field development (CLFD) workflow for recurrently revising the field development plan (FDP) using accrued information. To make the process practical, we integrated multiple machine learning concepts, an intelligent selection process to discard the worst FDP options, and a growing set of representative reservoir models. These concepts were combined and used with a cluster-based learning and evolution optimizer to efficiently explore the search space of decision variables. Unlike previous studies, we also accounted for the execution time of the CLFD workflow and worked with more realistic timelines to confirm its utility. To appreciate the importance of data assimilation and new well logs in a CLFD workflow, we carried out the study under rigorous conditions without a reduction in uncertainty attributes. The proposed CLFD workflow was implemented on a benchmark analogous to a giant field with extremely time-consuming simulation models. The results underscore that an ensemble with as few as 100 scenarios was sufficient to gauge the geological uncertainty, despite working with a giant field with highly heterogeneous characteristics. The CLFD workflow is demonstrated to improve efficiency by over 85% compared with the previously validated workflow. Finally, we present some key insights and open problems related to data assimilation for the practical application of a CLFD workflow.
Funding: Supported by the National Natural Science Foundation of China (No. 61903291) and the Key Research and Development Program of Shaanxi Province (No. 2022NY-094).
Abstract: The variable air volume (VAV) air conditioning system exhibits strong coupling and large time delays, so model predictive control (MPC) is normally used to pursue performance improvement. To address the difficulty of selecting VAV MPC controller parameters that give the system the desired response, a novel tuning method based on machine learning and improved particle swarm optimization (PSO) is proposed. In this method, the relationship between the MPC controller parameters and time-domain performance indices is established via machine learning. PSO is then used to optimize the MPC controller parameters to achieve better performance in terms of the time-domain indices. In addition, the PSO algorithm is further modified under the principles of population attenuation and event triggering to tune the MPC parameters and reduce the computation time of the tuning method. Finally, the effectiveness of the proposed method is validated via a hardware-in-the-loop VAV system.
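The outer PSO loop used for tuning can be sketched independently of the MPC details: particles search the controller parameter space to minimize a time-domain performance index. Here a known quadratic replaces the learned index, and the swarm constants (inertia 0.7, acceleration 1.5) are conventional defaults rather than the paper's improved, event-triggered variant.

```python
import numpy as np

# Stand-in cost: a time-domain performance index (e.g., weighted overshoot
# plus settling time) as a function of two MPC tuning knobs. In the paper
# this mapping is learned; a known quadratic with minimum at (3, -1) suffices.
def cost(p):
    return (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2

rng = np.random.default_rng(3)
n, iters = 20, 100
pos = rng.uniform(-5, 5, (n, 2))          # particle positions = parameter sets
vel = np.zeros((n, 2))
pbest = pos.copy()                        # personal bests
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy() # global best

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    # inertia + cognitive pull (personal best) + social pull (global best)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    c = np.array([cost(p) for p in pos])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
    gbest = pbest[pbest_cost.argmin()].copy()
```

The paper's modifications (shrinking the population and updating only on triggering events) aim to cut the number of `cost` evaluations, which matters when each evaluation involves a learned model or a simulation.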
Abstract: Occupant behaviour has significant impacts on the performance of machine learning algorithms when predicting building energy consumption. For a variety of reasons (e.g., underperforming building energy management systems or restrictions due to privacy policies), the availability of occupational data has long been an obstacle that hinders the performance of machine learning algorithms in predicting building energy consumption. Therefore, this study proposed an agent-based machine learning model whereby agent-based modelling was employed to generate simulated occupational data as input features for machine learning algorithms for building energy consumption prediction. Boruta feature selection was also introduced in this study to select all relevant features. The results indicated that the performance of machine learning algorithms in predicting building energy consumption was significantly improved when using simulated occupational data, with even greater improvements after conducting Boruta feature selection.
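Boruta's core trick is to compare each feature's importance against shuffled "shadow" copies of the data, keeping only features that consistently beat the best shadow. The sketch below is a simplified variant on synthetic data: it uses absolute correlation with the target as the importance measure instead of Boruta's random-forest importances, and a plain majority rule instead of its statistical test.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
X = rng.normal(size=(n, 5))
y = 3.0 * X[:, 0] + 0.5 * rng.normal(size=n)   # only feature 0 is relevant

def importance(col, target):
    """Stand-in importance score: absolute Pearson correlation."""
    return abs(np.corrcoef(col, target)[0, 1])

trials = 20
hits = np.zeros(5)
for _ in range(trials):
    # Shadow features: each column shuffled independently, destroying any
    # real relationship with the target while keeping marginal distributions.
    shadows = rng.permuted(X, axis=0)
    thresh = max(importance(shadows[:, j], y) for j in range(5))
    for j in range(5):
        hits[j] += importance(X[:, j], y) > thresh

# Keep features that beat the best shadow in a majority of trials.
selected = [j for j in range(5) if hits[j] > trials / 2]
```

The shadow-feature threshold adapts automatically to the dataset's noise level, which is why Boruta tends to retain "all relevant" features rather than a fixed top-k.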
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 12174101) and the Fundamental Research Funds for the Central Universities (Grant No. 2022MS051).
Abstract: Using the numerical renormalization group (NRG) method, we construct a large dataset with about one million spectral functions of the Anderson quantum impurity model. The dataset contains the density of states (DOS) of the host material, the strength of the Coulomb interaction between on-site electrons (U), and the hybridization between the host material and the impurity site (Γ). The continuous DOS and the spectral functions are stored with Chebyshev coefficients and wavelet functions, respectively. From this dataset, we build seven different machine learning networks to predict the spectral function from the input data: DOS, U, and Γ. Three evaluation indices, mean absolute error (MAE), relative error (RE), and root mean square error (RMSE), are used to analyze the prediction abilities of the different network models. Detailed analysis shows that, of the two widely used kinds of recurrent neural networks (RNNs), the gated recurrent unit (GRU) performs better than the long short-term memory (LSTM) network. A combination of bidirectional GRU (BiGRU) and GRU has the best performance among GRU, BiGRU, LSTM, and BiLSTM, with the MAE of BiGRU+GRU reaching 0.00037. We also tested a one-dimensional convolutional neural network (1DCNN) with 20 hidden layers and a residual neural network (ResNet). The 1DCNN has almost the same performance as the BiGRU+GRU network on the original dataset, but appears slightly less robust when all models are tested on two other independent datasets. The ResNet has the worst performance among the seven network models. The datasets presented in this paper, including the large dataset of spectral functions of the Anderson quantum impurity model, are openly available at https://doi.org/10.57760/sciencedb.j00113.00192.
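For readers unfamiliar with the recurrent units compared above, a single GRU step can be written out explicitly in NumPy. The input/hidden sizes and the random weights below are placeholders; a trained network for spectral functions would learn these parameters from the dataset, and frameworks fuse the gates for speed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, P):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    z = sigmoid(P["Wz"] @ x + P["Uz"] @ h + P["bz"])        # how much to update
    r = sigmoid(P["Wr"] @ x + P["Ur"] @ h + P["br"])        # how much to forget
    h_tilde = np.tanh(P["Wh"] @ x + P["Uh"] @ (r * h) + P["bh"])
    return (1.0 - z) * h + z * h_tilde                      # gated interpolation

rng = np.random.default_rng(5)
d_in, d_h = 3, 8        # e.g., (DOS sample, U, Gamma) per step -> 8 hidden units
P = {k: rng.normal(0, 0.3, (d_h, d_in if k[0] == "W" else d_h))
     for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}
P.update({k: np.zeros(d_h) for k in ("bz", "br", "bh")})

h = np.zeros(d_h)
seq = rng.normal(size=(50, d_in))   # a dummy input sequence
for x in seq:
    h = gru_step(x, h, P)
```

A bidirectional GRU simply runs a second copy of this recurrence over the reversed sequence and concatenates the two hidden states, which is what the BiGRU+GRU stack in the paper builds on.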
Abstract: Photonic inverse design concerns the problem of finding photonic structures with target optical properties. However, traditional methods based on optimization algorithms are time-consuming and computationally expensive. Recently, deep learning-based approaches have been developed to tackle the problem of inverse design efficiently. Although most of these neural network models have demonstrated high accuracy in different inverse design problems, no previous study has examined the potential effects under given constraints in nanomanufacturing. Additionally, the relative strength of different deep learning-based inverse design approaches has not been fully investigated. Here, we benchmark three commonly used deep learning models in inverse design: tandem networks, Variational Auto-Encoders, and Generative Adversarial Networks. We provide detailed comparisons in terms of their accuracy, diversity, and robustness. We find that tandem networks and Variational Auto-Encoders give the best accuracy, while Generative Adversarial Networks lead to the most diverse predictions. Our findings could serve as a guideline for researchers to select the model that best suits their design criteria and fabrication considerations. In addition, our code and data are publicly available and could be used for future inverse design model development and benchmarking.
Funding: Supported by the National Natural Science Foundation of China under Grant No. 52131102.
Abstract: With the rapid advancement of machine learning technology and its growing adoption in research and engineering applications, an increasing number of studies have embraced data-driven approaches for modeling wind turbine wakes. These models capture the complex, high-dimensional characteristics of wind turbine wakes while offering significantly greater prediction efficiency than physics-driven models. As a result, data-driven wind turbine wake models are regarded as powerful and effective tools for predicting wake behavior and turbine power output. This paper provides a concise yet comprehensive review of existing studies on wind turbine wake modeling that employ data-driven approaches. It begins by defining and classifying machine learning methods to facilitate a clearer understanding of the reviewed literature. The related studies are then categorized into four key areas: wind turbine power prediction, data-driven analytic wake models, wake field reconstruction, and the incorporation of explicit physical constraints. The accuracy of data-driven models is influenced by two primary factors, the quality of the training data and the performance of the model itself, so both data accuracy and model structure are discussed in detail within the review.
Funding: Supported by the National Natural Science Foundation of China (42122017, 41821002) and the Independent Innovation Research Program of China University of Petroleum (East China) (21CX06001A).
Abstract: Accurate and rapid identification of shale lithofacies is of great significance for evaluating and predicting sweet spots in shale oil and gas reservoirs. To address the low resolution of logging curves, this study establishes a grayscale-phase model based on high-resolution grayscale curves, using clustering analysis algorithms for shale lithofacies identification in the Shahejie Formation, Bohai Bay Basin, China. The grayscale phase is defined as the combination of absolute grayscale and relative amplitude together with their features. The absolute grayscale is the absolute magnitude of the gray values and is utilized for evaluating the material composition (mineral composition + total organic carbon) of the shale, while the relative amplitude is the difference between adjacent gray values and is used to identify the shale structure type. The results show that the grayscale-phase model can identify shale lithofacies well; the accuracy and applicability of the model were verified by the fitting relationship between absolute grayscale and shale mineral composition, as well as the correspondence between relative amplitudes and laminae development in shales. Four lithofacies are identified in the target layer of the study area: massive mixed shale, laminated mixed shale, massive calcareous shale, and laminated calcareous shale. This method not only effectively characterizes the material composition of shale but also numerically characterizes the degree of laminae development, solving the difficulty of identifying millimeter-scale laminae from logging curves, and can provide technical support for shale lithofacies identification and sweet-spot evaluation and prediction in complex continental lacustrine basins.
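The two grayscale-phase ingredients, absolute grayscale and relative amplitude, together with a clustering step, can be mimicked on a synthetic grayscale curve as below. The "massive" and "laminated" segments, the noise levels, and the one-dimensional two-class k-means are illustrative simplifications, not the study's actual data or algorithm.

```python
import numpy as np

# Synthetic grayscale curve: a "massive" interval with little bed-to-bed
# variation followed by a "laminated" interval with alternating laminae.
rng = np.random.default_rng(6)
massive = 120 + rng.normal(0, 2, 200)                       # smooth mid-gray
laminae = 100 + 40 * (np.arange(200) % 2) + rng.normal(0, 2, 200)
curve = np.concatenate([massive, laminae])

# Grayscale-phase features per sample: absolute grayscale (composition proxy)
# and relative amplitude (structure proxy: difference of adjacent values).
abs_gray = curve[:-1]
rel_amp = np.abs(np.diff(curve))

# Two-class Lloyd's k-means on the relative amplitude, initialized at its
# extremes so the result is deterministic: cluster 1 = high variability.
centers = np.array([rel_amp.min(), rel_amp.max()])
for _ in range(20):
    labels = (np.abs(rel_amp - centers[0])
              > np.abs(rel_amp - centers[1])).astype(int)
    centers = np.array([rel_amp[labels == k].mean() for k in (0, 1)])

frac_laminated_first = labels[:199].mean()    # massive segment -> ~0
frac_laminated_second = labels[200:].mean()   # laminated segment -> ~1
```

In the study's terms, `abs_gray` would feed the composition-related facies split (mixed vs. calcareous) while `rel_amp` separates massive from laminated structure.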
Funding: Supported by financing program AAAA-A16-116021010082-8.
Abstract: A biased sampling algorithm for the restricted Boltzmann machine (RBM) is proposed that allows generating configurations with a conserved quantity. To validate the method, a study of the short-range order in binary alloys with positive and negative exchange interactions is carried out. The network is trained on data collected by Monte Carlo simulations of a simple Ising-like binary alloy model and used to calculate the Warren–Cowley short-range order parameter and other thermodynamic properties. We demonstrate that the proposed method not only correctly reproduces the order parameters for the alloy concentration at which the network was trained, but can also predict them for other concentrations.
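The Warren–Cowley parameter that the trained RBM is used to compute has a simple direct definition. As a sanity check it can be evaluated exactly for a periodic one-dimensional A/B chain; the one-dimensional geometry here is an illustrative assumption, not the lattice used in the paper.

```python
import numpy as np

def warren_cowley(sites):
    """Nearest-neighbour Warren-Cowley parameter for a periodic 1-D A/B chain.

    alpha = 1 - P(B neighbour | A site) / c_B: -1 for perfect A-B alternation,
    +1 for complete segregation, 0 for a random arrangement.
    """
    sites = np.asarray(sites)
    c_B = sites.mean()                      # B encoded as 1, A as 0
    left, right = np.roll(sites, 1), np.roll(sites, -1)
    a_mask = sites == 0
    # fraction of an A site's neighbours that are B, averaged over A sites
    p_ab = np.concatenate([left[a_mask], right[a_mask]]).mean()
    return 1.0 - p_ab / c_B

alpha_ordered = warren_cowley([0, 1] * 50)              # alternating ABAB...
alpha_segregated = warren_cowley([0] * 50 + [1] * 50)   # AAA...BBB blocks
```

A conserved-quantity sampler matters precisely here: without fixing the concentration, generated configurations would mix different `c_B` values and bias the estimated parameter.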
Funding: Supported by the Key Laboratory of Nuclear Data Foundation (No. JCKY2022201C157).
Abstract: Interest has recently emerged in potential applications of (n,2n) reactions of unstable nuclei, but challenges arise from the scarcity of experimental cross-section data. This study aims to predict the (n,2n) reaction cross-sections of long-lived fission products based on a tensor model. This tensor model is an extension of the collaborative filtering algorithm used for nuclear data: it applies tensor decomposition and completion to predict (n,2n) reaction cross-sections, with the corresponding EXFOR data used as training data. The reliability of the proposed tensor model was validated by comparing its calculations with data from EXFOR and other databases. Predictions were made for long-lived fission products such as ^(60)Co, ^(79)Se, ^(93)Zr, ^(107)Pd, ^(126)Sn, and ^(137)Cs, providing a predicted energy range in which long-lived fission products can be effectively transmuted into shorter-lived or less radioactive isotopes. This method could be a powerful tool for completing (n,2n) reaction cross-section data and shows the possibility of selective transmutation of nuclear waste.
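Tensor completion in the spirit described above can be illustrated with a rank-1 CP model fitted by alternating least squares on partially observed entries. The tensor here is synthetic and noiseless; the axis interpretation (e.g., nuclide × energy × data source) and the rank-1 assumption are illustrative stand-ins, not the paper's actual model or EXFOR data.

```python
import numpy as np

# Noiseless rank-1 ground-truth tensor with ~30% of entries hidden,
# mimicking missing cross-section measurements.
rng = np.random.default_rng(7)
a = rng.uniform(0.5, 1.5, 4)
b = rng.uniform(0.5, 1.5, 5)
c = rng.uniform(0.5, 1.5, 6)
T = np.einsum("i,j,k->ijk", a, b, c)
W = (rng.random(T.shape) < 0.7).astype(float)   # 1 = observed, 0 = missing

# Alternating least squares on the observed entries only: each factor update
# is a closed-form weighted least-squares solve with the other two fixed.
ah, bh, ch = np.ones(4), np.ones(5), np.ones(6)
for _ in range(50):
    ah = (np.einsum("ijk,j,k->i", T * W, bh, ch)
          / np.einsum("ijk,j,k->i", W, bh**2, ch**2))
    bh = (np.einsum("ijk,i,k->j", T * W, ah, ch)
          / np.einsum("ijk,i,k->j", W, ah**2, ch**2))
    ch = (np.einsum("ijk,i,j->k", T * W, ah, bh)
          / np.einsum("ijk,i,j->k", W, ah**2, bh**2))

# Evaluate reconstruction on the entries that were never observed.
T_hat = np.einsum("i,j,k->ijk", ah, bh, ch)
rel_err = np.abs(T_hat - T)[W == 0].max() / T.max()
```

The held-out entries are recovered because the low-rank structure couples them to the observed ones; real nuclear data would need higher rank and regularization to handle noise.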
Abstract: This paper proposes a robust control scheme based on sequential convex programming and a learning-based model for nonlinear systems subject to additive uncertainties. To handle the system nonlinearity and unknown uncertainties, we study a tube-based model predictive control scheme that makes use of a feedforward neural network. Based on the boundedness of the average cost function as time approaches infinity, a min-max optimization problem (referred to as the min-max OP) is formulated to design the controller. The feasibility of this optimization problem and the practical stability of the controlled system are ensured. To demonstrate the efficacy of the proposed approach, a numerical simulation on a double-tank system is conducted; the simulation results verify the effectiveness of the proposed scheme.
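The min-max flavor of the controller design can be conveyed with a one-step toy problem: choose the control input that minimizes the worst-case successor-state cost over a bounded additive disturbance. The scalar dynamics, the cost, and the grid-search solution below are illustrative stand-ins for the paper's min-max OP, which is solved over a horizon with a neural-network model.

```python
import numpy as np

# One-step min-max illustration. Dynamics: x+ = a*x + b*u + w with |w| <= w_bar.
# All numbers are made up for the sketch.
a, b, x, w_bar = 1.2, 1.0, 1.0, 0.1

def worst_case_cost(u, n_w=201):
    """Inner maximization over the bounded disturbance, by grid search."""
    w = np.linspace(-w_bar, w_bar, n_w)
    x_next = a * x + b * u + w
    return np.max(x_next ** 2)          # stage cost on the successor state

# Outer minimization over the control input, also by grid search.
u_grid = np.linspace(-3.0, 3.0, 601)
costs = np.array([worst_case_cost(u) for u in u_grid])
u_star = u_grid[costs.argmin()]
```

For this linear toy case the worst case is attained at `w = ±w_bar`, so the optimum drives the nominal successor state to zero (`u* = -a*x/b = -1.2`) and the residual cost `w_bar**2` is exactly the irreducible effect of the disturbance tube.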