The burgeoning market for lithium-ion batteries has stimulated a growing need for more reliable battery performance monitoring. Accurate state-of-health (SOH) estimation is critical for ensuring battery operational performance. Despite numerous data-driven methods reported in existing research for battery SOH estimation, these methods often exhibit inconsistent performance across different application scenarios. To address this issue and overcome the performance limitations of individual data-driven models, integrating multiple models for SOH estimation has received considerable attention. Ensemble learning (EL) typically leverages the strengths of multiple base models to achieve more robust and accurate outputs. However, the lack of a clear review of current research hinders the further development of ensemble methods in SOH estimation. Therefore, this paper comprehensively reviews multi-model ensemble learning methods for battery SOH estimation. First, existing ensemble methods are systematically categorized into six classes based on their combination strategies. Different realizations and underlying connections are analyzed for each category of EL methods, highlighting distinctions, innovations, and typical applications. Subsequently, these ensemble methods are compared in terms of base models, combination strategies, and publication trends. Evaluations across six dimensions underscore the outstanding performance of stacking-based ensemble methods. Following this, these ensemble methods are further examined from the perspectives of weighted ensembles and diversity, aiming to inspire potential approaches for enhancing ensemble performance. Moreover, challenges such as base model selection, measuring model robustness and uncertainty, and the interpretability of ensemble models in practical applications are emphasized. Finally, future research prospects are outlined, specifically noting that deep learning ensembles are poised to advance ensemble methods for battery SOH estimation. The convergence of advanced machine learning with ensemble learning is anticipated to yield valuable avenues for research. Accelerated research in ensemble learning holds promising prospects for achieving more accurate and reliable battery SOH estimation under real-world conditions.
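As a minimal illustration of the stacking strategy highlighted above, the sketch below combines generic base regressors with a meta-learner for SOH estimation. The feature set, base models, and use of scikit-learn are assumptions for illustration, not the configurations surveyed in the review.

```python
# Hypothetical stacking ensemble for SOH estimation (features and models are illustrative).
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                              # e.g., features extracted from charging curves
y = 1.0 - 0.02 * X[:, 0] + 0.01 * rng.normal(size=500)     # synthetic SOH labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                ("svr", SVR(C=10.0))],
    final_estimator=Ridge(alpha=1.0),                      # meta-learner combines base-model predictions
    cv=5,
)
stack.fit(X_tr, y_tr)
print("held-out R^2:", stack.score(X_te, y_te))
```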
Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Through wireless transmission from a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only few-shot labeled data are sufficient for fine-tuning the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% in different scenarios, including the prediction of eight-direction commands and air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without the need for modifying the structure or undergoing extensive task-specific training. Its utility has been further extended to enhance human–machine interaction over digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communication.
The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform. High-throughput screening of catalytic performance is feasible because a large MOF structure database is available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO2 cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to an accuracy of up to 97% when the 75% quantile of the training set is used as the classification criterion. The feature contributions were further evaluated with SHAP and PDP analyses to provide physical insight. Using the model, 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 ℃ and 1 bar within one day, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top experimental performance among reported MOFs, in good agreement with the prediction.
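The quantile-threshold classification described above can be sketched as follows; the descriptor names, synthetic data, and use of scikit-learn are illustrative assumptions rather than the paper's actual setup.

```python
# Illustrative quantile-threshold classification for catalyst screening.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "lewis_acidity":   rng.normal(size=800),      # hypothetical mechanism-motivated descriptors
    "pore_diameter":   rng.normal(size=800),
    "open_metal_site": rng.integers(0, 2, size=800),
})
yield_pct = 50 + 10 * df["lewis_acidity"] + 5 * df["open_metal_site"] + rng.normal(scale=5, size=800)

# Label the top quartile (above the 75% quantile) as "efficient" catalysts.
label = (yield_pct > np.quantile(yield_pct, 0.75)).astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(clf, df, label, cv=5).mean())
clf.fit(df, label)
print(dict(zip(df.columns, clf.feature_importances_.round(3))))   # crude feature-contribution check
```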
Hydrogen generation and related energy applications rely heavily on the hydrogen evolution reaction (HER), which faces the challenges of slow kinetics and high overpotential. Efficient electrocatalysts, particularly single-atom catalysts (SACs) on two-dimensional (2D) materials, are essential. This study presents a few-shot machine learning (ML)-assisted high-throughput screening of 2D septuple-atomic-layer Ga2CoS4-x-supported SACs to predict HER catalytic activity. Initially, density functional theory (DFT) calculations showed that 2D Ga2CoS4 is inactive for the HER. However, defective Ga2CoS4-x (x = 0–0.25) monolayers exhibit excellent HER activity due to surface sulfur vacancies (SVs), with predicted overpotentials (0–60 mV) comparable to or lower than commercial Pt/C, which typically exhibits an overpotential of around 50 mV in acidic electrolyte, when the concentration of surface SVs is below 8.3%. SVs generate spin-polarized states near the Fermi level, making them effective HER sites. We demonstrate ML-accelerated HER overpotential predictions for all transition-metal SACs on 2D Ga2CoS4-x. Using DFT data from 18 SACs, an ML model with high prediction accuracy and reduced computation time was developed. An intrinsic descriptor linking SAC atomic properties to HER overpotential was identified. This study thus provides a framework for screening SACs on 2D materials, enhancing catalyst design.
In this study, an end-to-end deep learning method is proposed to improve the accuracy of continuum estimation in low-resolution gamma-ray spectra. A novel process for generating the theoretical continuum of a simulated spectrum is established, and a convolutional neural network consisting of 51 layers and more than 10^5 parameters is constructed to directly predict the entire continuum from the extracted global spectrum features. For testing, an in-house NaI-type whole-body counter is used, and 10^6 training spectrum samples (20% of which are reserved for testing) are generated using Monte Carlo simulations. In addition, the existing fitting, step-type, and peak-erosion methods are selected for comparison. The proposed method exhibits excellent performance, as evidenced by its activity error distribution and the smallest mean activity error of 1.5% among the evaluated methods. Additionally, a validation experiment is performed using a whole-body counter to analyze a human physical phantom containing four radionuclides. The largest activity error of the proposed method is −5.1%, which is considerably smaller than those of the comparative methods, confirming the test results. The multiscale feature extraction and nonlinear relation modeling in the proposed method establish a novel approach for accurate and convenient continuum estimation in low-resolution gamma-ray spectra. Thus, the proposed method is promising for accurate quantitative radioactivity analysis in practical applications.
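A much smaller stand-in for the spectrum-to-continuum mapping is sketched below: a 1D CNN that reads counts per channel and outputs a continuum estimate per channel. PyTorch, the layer counts, and all hyperparameters are assumptions, not the paper's 51-layer architecture.

```python
# Toy 1D CNN mapping a gamma-ray spectrum to its continuum (illustrative only).
import torch
import torch.nn as nn

class ContinuumNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4),   # predict continuum per channel
        )

    def forward(self, spectrum):                           # spectrum: (batch, 1, channels)
        return self.body(spectrum)

model = ContinuumNet()
spectra = torch.rand(8, 1, 1024)                           # synthetic simulated spectra
continuum = torch.rand(8, 1, 1024)                         # synthetic theoretical continua
loss = nn.functional.mse_loss(model(spectra), continuum)
loss.backward()                                            # one training step's gradient
print(float(loss))
```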
Organic solar cells (OSCs) hold great potential as a photovoltaic technology for practical applications. However, the traditional experimental trial-and-error approach to designing and engineering OSCs can be complex, expensive, and time-consuming. Machine learning (ML) techniques enable the proficient extraction of information from datasets, allowing the development of realistic models capable of predicting the efficacy of materials with commendable accuracy. The PM6 donor has great potential for high-performance OSCs. However, for the rational design of a ternary blend, it is crucial to accurately forecast the power conversion efficiency (PCE) of ternary OSCs (TOSCs) based on a PM6 donor. Accordingly, we collected the device parameters of PM6-based TOSCs and evaluated the feature importance of their molecular descriptors to develop predictive models. In this study, we used five different ML algorithms for analysis and prediction. For the analysis, the classification and regression tree provided different rules, heuristics, and patterns from the heterogeneous dataset. The random forest algorithm outperformed the other ML algorithms in predicting the output performance of PM6-based TOSCs. Finally, we validated the ML outcomes by fabricating PM6-based TOSCs. Our study presents a rapid strategy for assessing a high PCE while elucidating the substantial influence of diverse descriptors.
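A hedged sketch of the model-comparison step is shown below: several regressors are scored by cross-validation on a PCE target. The descriptor set, the specific models, and the use of scikit-learn are illustrative assumptions.

```python
# Comparing several regressors for PCE prediction on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 8))                          # e.g., molecular descriptors + device parameters
pce = 12 + 2 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=300)

models = {
    "linear":        LinearRegression(),
    "tree":          DecisionTreeRegressor(max_depth=5, random_state=0),
    "gbdt":          GradientBoostingRegressor(random_state=0),
    "random_forest": RandomForestRegressor(n_estimators=300, random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, pce, cv=5, scoring="r2").mean()
    print(f"{name:>14s}  mean CV R^2 = {r2:.3f}")
```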
Machine learning (ML) is well suited to the prediction of high-complexity, high-dimensional problems such as those encountered in terminal ballistics. We evaluate the performance of four popular ML-based regression models, extreme gradient boosting (XGBoost), artificial neural network (ANN), support vector regression (SVR), and Gaussian process regression (GP), on two common terminal ballistics problems: (a) predicting the V50 ballistic limit of monolithic metallic armour impacted by small- and medium-calibre projectiles and fragments, and (b) predicting the depth to which a projectile will penetrate a target of semi-infinite thickness. To achieve this, we utilise two datasets, each consisting of approximately 1000 samples, collated from public-release sources. We demonstrate that all four model types provide similarly excellent agreement when interpolating within the training data and diverge when extrapolating outside this range. Although extrapolation is not advisable for ML-based regression models, such capability is required for applications such as lethality/survivability analysis. To circumvent this, we incorporate expert knowledge and physics-based models via enforced monotonicity, as a Gaussian prior mean, and through a modified loss function. The physics-informed models demonstrate improved performance over both classical physics-based models and the basic ML regression models, providing an ability to accurately fit experimental data when available and to revert to the physics-based model when not. The resulting models demonstrate high levels of predictive accuracy over a very wide range of projectile types, target materials and thicknesses, and impact conditions significantly more diverse than is achievable with any existing analytical approach. Compared with numerical analysis tools such as finite element solvers, the ML models run orders of magnitude faster. We provide general guidelines throughout for the development, application, and reporting of ML models in terminal ballistics problems.
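One concrete way to enforce the kind of monotonicity described above is XGBoost's monotone_constraints option, sketched below on synthetic penetration data. The feature choice, the sign of each constraint, and the availability of the xgboost package are assumptions; the paper's exact implementation may differ.

```python
# Monotonicity-constrained gradient boosting on synthetic penetration-depth data.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(3)
thickness = rng.uniform(2, 30, size=600)            # target thickness (mm)
velocity = rng.uniform(300, 1200, size=600)         # impact velocity (m/s)
depth = 0.02 * velocity - 0.5 * thickness + rng.normal(scale=2, size=600)

X = np.column_stack([thickness, velocity])
# Constrain predicted depth to decrease with thickness (-1) and increase with velocity (+1).
model = XGBRegressor(n_estimators=300, max_depth=4, monotone_constraints=(-1, 1))
model.fit(X, depth)
print(model.predict(np.array([[10.0, 800.0], [10.0, 900.0]])))   # higher velocity -> deeper
```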
Leveraging big data analytics and advanced algorithms to accelerate and optimize the process of molecular and materials design, synthesis, and application has revolutionized the field of molecular and materials science, allowing researchers to gain a deeper understanding of material properties and behaviors and leading to the development of new materials that are more efficient and reliable. However, the difficulty of constructing large-scale datasets of new molecules/materials, due to the high cost of data acquisition and annotation, limits the development of conventional machine learning (ML) approaches. Knowledge-reused transfer learning (TL) methods are expected to break this dilemma. The application of TL lowers the data requirements for model training, which makes TL stand out in research addressing data quality issues. In this review, we summarize recent progress in TL related to molecules and materials. We focus on the application of TL methods for the discovery of advanced molecules/materials, particularly the construction of TL frameworks for different systems and how TL can enhance the performance of models. In addition, the challenges of TL are discussed.
Machine learning combined with density functional theory (DFT) enables rapid exploration of catalyst descriptor spaces, such as adsorption energy, facilitating rapid and effective catalyst screening. However, models for predicting adsorption energies on oxides are still lacking, owing to the complexity of elemental species and ambiguous coordination environments. This work proposes an active learning workflow (LeNN) founded on local electronic transfer features (e) and the principle of coordinate rotation invariance. By accurately characterizing the electron transfer to adsorption-site atoms and their surrounding geometric structures, LeNN mitigates abrupt feature changes due to different element types and clarifies coordination environments. As a result, it enables the prediction of *H adsorption energy on binary oxide surfaces with a mean absolute error (MAE) below 0.18 eV. Moreover, we incorporate local coverage (θ_l) and leverage a neural network ensemble to establish an active learning workflow, attaining a prediction MAE below 0.2 eV for 5419 multi-*H adsorption structures. These findings validate the universality and capability of the proposed features in predicting *H adsorption energy on binary oxide surfaces.
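The ensemble-driven active learning loop mentioned above can be illustrated generically: several small regressors are trained on the labeled structures, and their disagreement selects the next candidates for DFT labeling. The placeholder descriptors and scikit-learn models below are assumptions, not LeNN's electronic-transfer features.

```python
# Generic ensemble-disagreement acquisition for active learning (illustrative).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X_labeled = rng.normal(size=(60, 10))              # descriptors of DFT-labeled structures
y_labeled = X_labeled @ rng.normal(size=10) + rng.normal(scale=0.05, size=60)
X_pool = rng.normal(size=(2000, 10))               # unlabeled candidate structures

ensemble = [
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=seed).fit(X_labeled, y_labeled)
    for seed in range(5)
]
preds = np.stack([m.predict(X_pool) for m in ensemble])   # (n_models, n_pool)
uncertainty = preds.std(axis=0)                            # disagreement as acquisition score

next_batch = np.argsort(uncertainty)[-10:]                 # most uncertain structures
print("indices to label with DFT next:", next_batch)
```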
With the rapid development of the Internet, network security and data privacy are increasingly valued. Although classical Network Intrusion Detection Systems (NIDS) based on Deep Learning (DL) models can provide good detection accuracy, collecting samples for centralized training carries a huge risk of data privacy leakage. Furthermore, the training of supervised deep learning models requires a large number of labeled samples, which is usually cumbersome. The "black-box" problem also makes DL-based NIDS models untrustworthy. In this paper, we propose a trusted Federated Learning (FL) traffic intrusion detection method called FL-TIDS to address the above problems. In FL-TIDS, we design an unsupervised intrusion detection model based on autoencoders that alleviates the reliance on labeled samples. At the same time, we use FL for model training to protect data privacy. In addition, we design an improved SHAP interpretability method based on the chi-square test to perform interpretable analysis of the trained model. We conducted several experiments to evaluate the proposed FL-TIDS. We first determine experimentally the structure and the number of neurons of the unsupervised AE model. Secondly, we evaluate the proposed method using the UNSW-NB15 and CICIDS2017 datasets. The experimental results show that the unsupervised AE model outperforms the other seven intrusion detection models in terms of precision, recall, and F1-score. Then, federated learning is used to train the intrusion detection model. The experimental results indicate that the model is more accurate than the locally trained model. Finally, we use the improved SHAP explainability method based on the chi-square test to analyze explainability. The analysis results show that the identification characteristics of the model are consistent with the attack characteristics, and the model is reliable.
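The unsupervised detection idea can be sketched as follows: an autoencoder is trained to reconstruct benign traffic only, and records with large reconstruction error are flagged as intrusions. The feature dimension, threshold choice, and use of PyTorch are assumptions for illustration.

```python
# Reconstruction-error anomaly detection with a small autoencoder (illustrative).
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, n_features=40):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(), nn.Linear(16, 8))
        self.dec = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, n_features))

    def forward(self, x):
        return self.dec(self.enc(x))

benign = torch.rand(2048, 40)                       # stand-in for normal traffic features
model = AE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                                # train to reconstruct benign traffic only
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(benign), benign)
    loss.backward()
    opt.step()

with torch.no_grad():
    errors = ((model(benign) - benign) ** 2).mean(dim=1)
    threshold = errors.quantile(0.99)               # e.g., flag the top 1% of reconstruction errors
    test = torch.rand(5, 40)
    flags = ((model(test) - test) ** 2).mean(dim=1) > threshold
print(flags)                                        # True -> suspected intrusion
```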
WiFi fingerprinting localization methods have been a hot topic in indoor positioning because of their universality and location-related features. The basic assumption of fingerprinting localization is that the received signal strength indication (RSSI) distance accords with the location distance. Therefore, how to efficiently match the current RSSI of the user with the RSSI in the fingerprint database is the key to achieving high-accuracy localization. In this paper, a particle swarm optimization–extreme learning machine (PSO-ELM) algorithm is proposed on the basis of the original fingerprinting localization. Firstly, we collect the RSSI of the experimental area to construct the fingerprint database, and the ELM algorithm is applied in the online stage to determine the corresponding relation between the location of the terminal and the RSSI it receives. Secondly, the PSO algorithm is used to improve the biases and weights of the ELM neural network, and the global optimal results are obtained. Finally, extensive simulation results are presented. It is shown that the proposed algorithm can effectively reduce the mean localization error and improve positioning accuracy when compared with the K-Nearest Neighbor (KNN), K-means, and Back-Propagation (BP) algorithms.
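A compact sketch of the PSO-ELM idea follows: the ELM's input weights and biases are treated as particle positions, and PSO searches for the set that minimizes the fingerprint regression error. The swarm settings, network size, and synthetic data are assumptions, not the paper's configuration.

```python
# PSO searching over ELM input weights/biases for RSSI fingerprint regression (illustrative).
import numpy as np

rng = np.random.default_rng(5)
n_ap, n_hidden = 6, 20                           # access points (RSSI features), hidden neurons
X = rng.uniform(-90, -30, size=(200, n_ap))      # fingerprint RSSI database
Xs = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize RSSI before the hidden layer
Y = rng.uniform(0, 50, size=(200, 2))            # reference (x, y) positions

def elm_error(params):
    W = params[: n_ap * n_hidden].reshape(n_ap, n_hidden)
    b = params[n_ap * n_hidden:]
    H = 1.0 / (1.0 + np.exp(-(Xs @ W + b)))      # hidden-layer output
    beta = np.linalg.pinv(H) @ Y                 # closed-form ELM output weights
    return np.mean(np.linalg.norm(H @ beta - Y, axis=1))

dim = n_ap * n_hidden + n_hidden
pos = rng.normal(size=(30, dim))
vel = np.zeros_like(pos)
pbest, pbest_err = pos.copy(), np.array([elm_error(p) for p in pos])
gbest = pbest[pbest_err.argmin()]

for _ in range(50):                              # standard PSO velocity/position update
    r1, r2 = rng.random((30, dim)), rng.random((30, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    err = np.array([elm_error(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[pbest_err.argmin()]

print("best mean localization error:", pbest_err.min())
```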
In traditional well-log depth matching tasks, manual adjustments are required, which is significantly labor-intensive for multiple wells and leads to low work efficiency. This paper introduces a multi-agent deep reinforcement learning (MARL) method to automate the depth matching of multi-well logs. The method defines multiple top-down dual sliding windows based on a convolutional neural network (CNN) to extract and capture similar feature sequences on well logs, and it establishes an interaction mechanism between agents and the environment to control the depth matching process. Specifically, an agent selects an action to translate or scale the feature sequence based on a double deep Q-network (DDQN). Through the feedback of the reward signal, it evaluates the effectiveness of each action, aiming to obtain the optimal strategy and improve the accuracy of the matching task. Our experiments show that MARL can automatically perform depth matching for well logs in multiple wells and reduce manual intervention. In an oil field application, a comparative analysis of dynamic time warping (DTW), deep Q-network (DQN), and DDQN methods revealed that the DDQN algorithm, with its dual-network evaluation mechanism, significantly improves performance by identifying and aligning more details in the well-log feature sequences, thus achieving higher depth matching accuracy.
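The dual-network evaluation referred to above is the double-DQN target: the online network selects the next action and the target network evaluates it. The sketch below shows only that target computation on toy tensors; the state/action sizes and tiny networks are assumptions, not the paper's sliding-window agent.

```python
# Double-DQN target computation (illustrative).
import torch
import torch.nn as nn

n_state, n_action, gamma = 16, 4, 0.99      # e.g., actions: shift up/down, stretch, compress
online = nn.Sequential(nn.Linear(n_state, 32), nn.ReLU(), nn.Linear(32, n_action))
target = nn.Sequential(nn.Linear(n_state, 32), nn.ReLU(), nn.Linear(32, n_action))
target.load_state_dict(online.state_dict())

s = torch.rand(64, n_state)                 # batch of transitions (s, a, r, s')
a = torch.randint(0, n_action, (64,))
r = torch.rand(64)
s_next = torch.rand(64, n_state)

with torch.no_grad():
    best_next = online(s_next).argmax(dim=1)                               # action selection: online net
    q_next = target(s_next).gather(1, best_next.unsqueeze(1)).squeeze(1)   # evaluation: target net
    y = r + gamma * q_next                                                 # DDQN target

q_sa = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
loss = nn.functional.smooth_l1_loss(q_sa, y)
loss.backward()
print(float(loss))
```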
The accurate prediction of the peak overpressure of explosion shockwaves is significant in fields such as explosion hazard assessment and structural protection, where explosion shockwaves serve as typical destructive elements. To address the insufficient accuracy of existing physical models for predicting the peak overpressure of ground-reflected waves, two physics-informed machine learning models are constructed. The results demonstrate that the machine learning models, which incorporate physical information by predicting the deviation between the physical model and actual values and by adding a physical loss term to the loss function, can accurately predict both the training and out-of-training datasets. Compared to existing physical models, the average relative error in the training domain is reduced from 17.459%–48.588% to 2%, and the proportion of cases with an average relative error below 20% increases from 0%–59.4% to more than 99%. In addition, the average relative error outside the training-set range is reduced from 14.496%–29.389% to 5%, and the proportion of cases with an average relative error below 20% increases from 0%–71.39% to more than 99%. The inclusion of a physical loss term enforcing monotonicity in the loss function effectively improves the extrapolation performance of machine learning. The findings of this study provide a valuable reference for explosion hazard assessment and anti-explosion structural design in various fields.
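A minimal sketch of such a composite loss is given below: a data-fitting term plus a physics penalty that discourages predicted overpressure from increasing with distance. The network size, the penalty weight, the chosen input features, and the synthetic data are assumptions, not the paper's models.

```python
# Data loss + monotonicity (physics) penalty for peak-overpressure regression (illustrative).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))
x = torch.rand(256, 3)                       # e.g., [charge mass, distance, burst height]
y = torch.rand(256, 1)                       # measured peak overpressure (placeholder)

pred = net(x)
data_loss = nn.functional.mse_loss(pred, y)

# Physics term: perturb the distance feature upward and penalize any predicted increase.
x_far = x.clone()
x_far[:, 1] = x_far[:, 1] + 0.05
violation = torch.relu(net(x_far) - pred)    # positive only where monotonicity is violated
physics_loss = violation.mean()

loss = data_loss + 10.0 * physics_loss       # weighted sum; the weight is an assumption
loss.backward()
print(float(data_loss), float(physics_loss))
```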
Due to the limited computational capability and the diversity of Internet of Things devices working in different environments, we consider few-shot learning-based automatic modulation classification (AMC) to improve its reliability. A data enhancement module (DEM) is designed using a convolutional layer to supplement frequency-domain information and provide a nonlinear mapping that is beneficial for AMC. A multimodal network is designed with multiple residual blocks, where each residual block has multiple convolutional kernels of different sizes for diverse feature extraction. Moreover, a deep supervised loss function is designed to supervise all parts of the network, including the hidden layers and the DEM. Since different models may output different results, a cooperative classifier is designed to avoid the randomness of a single model and improve reliability. Simulation results show that this few-shot learning-based AMC method can significantly improve AMC accuracy compared to existing methods.
The pipeline isolation plugging robot (PIPR) is an important tool in pipeline maintenance operations. During the plugging process, violent vibration can be induced by the flow field, which can cause serious damage to the pipeline and the PIPR. In this paper, we propose a dynamic regulating strategy to reduce the plugging-induced vibration by regulating the spoiler angle and plugging velocity. Firstly, dynamic plugging simulations and experiments are performed to study the flow field changes during dynamic plugging, and the pressure difference is proposed to evaluate the degree of flow field vibration. Secondly, mathematical models relating the pressure difference to the plugging state and spoiler angle are established based on an extreme learning machine (ELM) optimized by an improved sparrow search algorithm (ISSA). Finally, a modified Q-learning algorithm based on simulated annealing is applied to determine the optimal strategy for the spoiler angle and plugging velocity in real time. The results show that the proposed method can reduce the plugging-induced vibration by 19.9% and 32.7% on average, compared with single-regulating methods. This study can effectively ensure the stability of the plugging process.
Although modulation classification based on deep neural networks can achieve high modulation classification (MC) accuracy, catastrophic forgetting occurs when the neural network model continues to learn new tasks. In this paper, we simulate a dynamic wireless communication environment and focus on breaking the learning paradigm of isolated automatic MC by proposing an algorithm for continual automatic MC. Firstly, a memory storing representative old-task modulation signals is built; it is employed to constrain the gradient update direction of new tasks in the continual learning stage, ensuring that the loss on old tasks also keeps decreasing. Secondly, to better simulate the dynamic wireless communication environment, we employ a mini-batch gradient algorithm that is more suitable for continual learning. Finally, the signals in the memory can be replayed to further strengthen the characteristics of the old-task signals in the model. Simulation results verify the effectiveness of the method.
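The replay idea can be illustrated as follows: each new-task mini-batch is mixed with stored old-task signals so that updates also reduce the old tasks' loss. The memory size, mixing ratio, signal shape, and the simple classifier are assumptions for illustration, not the paper's exact design.

```python
# Replay-augmented mini-batch training against catastrophic forgetting (illustrative).
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Flatten(), nn.Linear(2 * 128, 64), nn.ReLU(), nn.Linear(64, 11))
opt = torch.optim.SGD(classifier.parameters(), lr=0.01)

memory_x = torch.randn(256, 2, 128)          # stored representative old-task IQ signals
memory_y = torch.randint(0, 11, (256,))
new_x = torch.randn(1024, 2, 128)            # incoming new-task signals
new_y = torch.randint(0, 11, (1024,))

for step in range(0, 1024, 32):              # mini-batch updates over the new task
    idx = torch.randint(0, 256, (32,))       # replay an equal-sized batch from memory
    x = torch.cat([new_x[step:step + 32], memory_x[idx]])
    y = torch.cat([new_y[step:step + 32], memory_y[idx]])
    opt.zero_grad()
    nn.functional.cross_entropy(classifier(x), y).backward()
    opt.step()
print("finished one replay-augmented pass over the new task")
```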
In the context of edge computing environments in general and the metaverse in particular, federated learning (FL) has emerged as a distributed machine learning paradigm that allows multiple users to collaborate on training a shared machine learning model locally, eliminating the need to upload raw data to a central server. It is perhaps the only training paradigm that preserves the privacy of user data, which is essential for computing environments as personal as the metaverse. However, the originally proposed FL architecture does not scale to the large number of user devices in the metaverse community. To mitigate this problem, hierarchical federated learning (HFL) has been introduced as a general distributed learning paradigm, inspiring a number of research works. In this paper, we present several types of HFL architectures, with a special focus on the three-layer client-edge-cloud HFL architecture, which is most pertinent to the metaverse due to its delay-sensitive nature. We also examine works that take advantage of the natural layered organization of three-layer client-edge-cloud HFL to tackle some of the most challenging problems in FL within the metaverse. Finally, we outline some future research directions for HFL in the metaverse.
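The three-layer aggregation pattern can be shown with a toy numerical sketch: clients average into edge models, and edges average into a cloud model. Flat parameter vectors and sample-count weighting are simplifying assumptions for illustration.

```python
# Two-stage (client -> edge -> cloud) weighted averaging, FedAvg-style (illustrative).
import numpy as np

rng = np.random.default_rng(7)
n_edges, clients_per_edge, dim = 3, 4, 10

# Each client holds a locally trained parameter vector and a sample count.
client_params = rng.normal(size=(n_edges, clients_per_edge, dim))
client_samples = rng.integers(50, 500, size=(n_edges, clients_per_edge))

# Stage 1: client -> edge aggregation (weighted average at each edge server).
edge_params = np.stack([
    np.average(client_params[e], axis=0, weights=client_samples[e])
    for e in range(n_edges)
])

# Stage 2: edge -> cloud aggregation (weighted by each edge's total samples).
cloud_params = np.average(edge_params, axis=0, weights=client_samples.sum(axis=1))
print(cloud_params.round(3))
```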
The recent wave of the artificial intelligence (AI) revolution has aroused unprecedented interest in making human society more intelligent. As an essential component that bridges the physical world and digital signals, flexible sensors are evolving from single sensing elements into smarter systems capable of highly efficient acquisition, analysis, and even perception of vast, multifaceted data. While challenging from a manual perspective, the development of intelligent flexible sensing has been remarkably facilitated by the rapid advances of brain-inspired AI innovations at both the algorithm (machine learning) and the framework (artificial synapses) level. This review presents the recent progress of emerging AI-driven, intelligent flexible sensing systems. The basic concepts of machine learning and artificial synapses are introduced. The new enabling features induced by the fusion of AI and flexible sensing are comprehensively reviewed, which significantly advances applications such as flexible sensory systems, soft/humanoid robotics, and human activity monitoring. As two of the most profound innovations of the twenty-first century, the deep integration of flexible sensing and AI technology holds tremendous potential for creating a smarter world for human beings.
In-situ upgrading by heating is feasible for low-maturity shale oil, where the pore space evolves dynamically. We characterize this response for a heated substrate concurrently imaged by SEM. We systematically follow the evolution of pore quantity, size (length, width, and cross-sectional area), orientation, shape (aspect ratio, roundness, and solidity), and their anisotropy, interpreted by machine learning. Results indicate that heating generates new pores in both organic matter and inorganic minerals. However, the newly formed pores are smaller than the original pores and thus reduce the average lengths and widths of the bedding-parallel pore system. Conversely, the average pore lengths and widths increase in the bedding-perpendicular direction. Besides, heating increases the cross-sectional area of pores in low-maturity oil shales, where this growth tendency fluctuates below 300 ℃ but becomes steady above 300 ℃. In addition, the orientation and shape of the newly formed heating-induced pores follow the habit of the original pores and follow the initial probability distributions of pore orientation and shape. Only limited anisotropy is detected in pore direction and shape, indicating similar modes of evolution both bedding-parallel and bedding-normal. We propose a straightforward but robust model to describe the evolution of the pore system in low-maturity oil shales during heating.
We consider an image semantic communication system in a time-varying fading Gaussian MIMO channel with a finite number of channel states. A deep learning-aided broadcast approach scheme is proposed to enable adaptive semantic transmission under different channel states. We combine the classic broadcast approach with an image transformer to implement this adaptive joint source and channel coding (JSCC) scheme. Specifically, we utilize a neural network (NN) to jointly optimize the hierarchical image compression and the superposition code mapping within this scheme. The learned transformers and codebooks allow recovery of the image with adaptive quality and a low error rate at the receiver side in each channel state. The simulation results show that our proposed scheme can dynamically adapt the coding to the current channel state and outperforms some existing intelligent schemes with fixed coding blocks.