Funding: Supported by the Research Grant Fund from Kwangwoon University in 2023, the National Natural Science Foundation of China under Grant (62311540155), the Taishan Scholars Project Special Funds (tsqn202312035), and the open research foundation of the State Key Laboratory of Integrated Chips and Systems.
Abstract: Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Through wireless transmission from a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only few-shot labeled data are sufficient for fine-tuning the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% in different scenarios, including the prediction of eight-direction commands and air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without the need to modify the structure or undergo extensive task-specific training. Its utility has been further extended to enhance human–machine interaction over digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communication.
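As an illustration of the pretrain-then-fine-tune recipe described above, the sketch below pairs a small 1D-CNN encoder with an NT-Xent contrastive loss on unlabeled signal windows and then fine-tunes a linear head on a few labeled samples. The window shape (4 channels × 128 samples), augmentations, and network sizes are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch: contrastive pretraining on unlabeled 4-channel wristband windows,
# then few-shot fine-tuning of a linear head. Shapes, augmentations, and
# hyperparameters are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """1D-CNN encoder mapping (batch, 4, 128) capacitance windows to embeddings."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 32, 7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )
    def forward(self, x):
        return self.net(x)

def augment(x):
    """Cheap augmentations for wrist-motion signals: random scaling plus jitter."""
    return x * (1 + 0.1 * torch.randn(x.size(0), 1, 1)) + 0.01 * torch.randn_like(x)

def nt_xent(z1, z2, tau=0.2):
    """NT-Xent contrastive loss over a batch of positive pairs (z1[i], z2[i])."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)            # (2N, D)
    sim = z @ z.t() / tau                                   # scaled cosine similarities
    n = z1.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Stage 1: self-supervised pretraining on unlabeled random wrist movements.
unlabeled = torch.randn(256, 4, 128)                        # placeholder signals
for _ in range(5):
    for i in range(0, len(unlabeled), 64):
        batch = unlabeled[i:i + 64]
        loss = nt_xent(encoder(augment(batch)), encoder(augment(batch)))
        opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: few-shot fine-tuning with a small labeled set (e.g., 8 directions).
head = nn.Linear(64, 8)
few_x, few_y = torch.randn(40, 4, 128), torch.randint(0, 8, (40,))  # 5 shots per class
ft_opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
for _ in range(50):
    loss = F.cross_entropy(head(encoder(few_x)), few_y)
    ft_opt.zero_grad(); loss.backward(); ft_opt.step()
```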
Funding: Financial support from the National Key Research and Development Program of China (2021YFB3501501), the National Natural Science Foundation of China (Nos. 22225803, 22038001, 22108007 and 22278011), the Beijing Natural Science Foundation (No. Z230023), and the Beijing Science and Technology Commission (No. Z211100004321001).
Abstract: The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform. High-throughput screening of catalytic performance is feasible because a large MOF structure database is available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO_(2) cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to a high accuracy of up to 97% with the 75% quantile of the training set as the classification criterion. The feature contribution was further evaluated with SHAP and PDP analyses to provide some physical understanding. Using the model, 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 °C and 1 bar within one day, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top performance experimentally among the reported MOFs, in good agreement with the prediction.
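A minimal sketch of the screening setup described above, assuming synthetic placeholder descriptors: samples above the 75% quantile of the training target are labeled as promising, a random forest classifier is trained, and permutation importance stands in for the SHAP/PDP attribution.

```python
# Sketch: label MOFs as "promising" if their simulated yield exceeds the 75%
# quantile of the training set, train a classifier on mechanism-informed
# descriptors, and rank feature contributions. Descriptor names and data are
# placeholders; permutation importance substitutes for SHAP/PDP here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
descriptors = ["pore_diameter", "Lewis_acid_site_density", "open_metal_site", "ligand_pKa"]
X = rng.random((500, len(descriptors)))          # placeholder descriptor matrix
yield_ = X[:, 0] * 0.6 + X[:, 1] * 0.3 + rng.normal(0, 0.05, 500)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, yield_, random_state=0)
threshold = np.quantile(y_tr, 0.75)              # 75% quantile as the class boundary
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr > threshold)
print("test accuracy:", clf.score(X_te, y_te > threshold))

# Rank descriptor contributions (SHAP/PDP would give richer per-sample attributions).
imp = permutation_importance(clf, X_te, y_te > threshold, n_repeats=10, random_state=0)
for name, score in sorted(zip(descriptors, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```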
Funding: National Natural Science Foundation of China (52075420); Fundamental Research Funds for the Central Universities (xzy022023049); National Key Research and Development Program of China (2023YFB3408600).
Abstract: The burgeoning market for lithium-ion batteries has stimulated a growing need for more reliable battery performance monitoring. Accurate state-of-health (SOH) estimation is critical for ensuring battery operational performance. Despite numerous data-driven methods reported in existing research for battery SOH estimation, these methods often exhibit inconsistent performance across different application scenarios. To address this issue and overcome the performance limitations of individual data-driven models, integrating multiple models for SOH estimation has received considerable attention. Ensemble learning (EL) typically leverages the strengths of multiple base models to achieve more robust and accurate outputs. However, the lack of a clear review of current research hinders the further development of ensemble methods in SOH estimation. Therefore, this paper comprehensively reviews multi-model ensemble learning methods for battery SOH estimation. First, existing ensemble methods are systematically categorized into six classes based on their combination strategies. Different realizations and underlying connections are meticulously analyzed for each category of EL methods, highlighting distinctions, innovations, and typical applications. Subsequently, these ensemble methods are comprehensively compared in terms of base models, combination strategies, and publication trends. Evaluations across six dimensions underscore the outstanding performance of stacking-based ensemble methods. Following this, these ensemble methods are further inspected from the perspectives of weighted ensembles and diversity, aiming to inspire potential approaches for enhancing ensemble performance. Moreover, the need to address challenges such as base model selection, the measurement of model robustness and uncertainty, and the interpretability of ensemble models in practical applications is emphasized. Finally, future research prospects are outlined, specifically noting that deep learning ensembles are poised to advance ensemble methods for battery SOH estimation. The convergence of advanced machine learning with ensemble learning is anticipated to yield valuable avenues for research. Accelerated research in ensemble learning holds promising prospects for achieving more accurate and reliable battery SOH estimation under real-world conditions.
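For the stacking strategy highlighted above, a minimal sketch using scikit-learn's StackingRegressor: several base regressors are combined by a meta-learner fitted on their out-of-fold predictions. The features and data are synthetic placeholders standing in for SOH features.

```python
# Minimal stacking ensemble for SOH estimation: base regressors are combined by a
# meta-learner trained on their out-of-fold predictions. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.random((300, 5))                      # e.g., features extracted from partial charge curves
soh = 1.0 - 0.3 * X[:, 0] - 0.1 * X[:, 1] + rng.normal(0, 0.01, 300)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("svr", SVR(C=10.0)),
                ("ridge", Ridge(alpha=1.0))],
    final_estimator=Ridge(alpha=1.0),         # meta-learner on out-of-fold base predictions
    cv=5,
)
scores = cross_val_score(stack, X, soh, cv=5, scoring="neg_root_mean_squared_error")
print("stacked SOH RMSE:", -scores.mean())
```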
Funding: Project supported by the Natural Science Foundation of Nanjing University of Posts and Telecommunications (Grant Nos. NY223109, NY220119, and NY221055), the China Postdoctoral Science Foundation (Grant No. 2022M721693), and the National Natural Science Foundation of China (Grant No. 12404365).
Abstract: Classification of quantum phases is one of the most important areas of research in condensed matter physics. In this work, we obtain the phase diagram of one-dimensional quasiperiodic models via unsupervised learning. Firstly, we choose two advanced unsupervised learning algorithms, namely, density-based spatial clustering of applications with noise (DBSCAN) and ordering points to identify the clustering structure (OPTICS), to explore the distinct phases of the Aubry–André–Harper model and the quasiperiodic p-wave model. The unsupervised learning results match well with those obtained through traditional numerical diagonalization. Finally, we assess similarity across different algorithms and find that the highest degree of similarity between the results of unsupervised learning algorithms and those of traditional algorithms exceeds 98%. Our work sheds light on applications of unsupervised learning for phase classification.
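The sketch below illustrates the clustering idea on the Aubry–André–Harper model: mean inverse participation ratios of the eigenstates are computed across the quasiperiodic potential strength and clustered with DBSCAN and OPTICS. The feature choice and the clustering parameters are illustrative assumptions, not the exact pipeline of the paper.

```python
# Illustrative sketch: cluster Aubry-Andre-Harper (AAH) eigenstate features with
# DBSCAN and OPTICS to separate extended and localized regimes. Feature choice
# (mean inverse participation ratio) and clustering parameters are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN, OPTICS
from sklearn.preprocessing import StandardScaler

L, t, beta = 144, 1.0, (np.sqrt(5) - 1) / 2      # lattice size, hopping, inverse golden ratio

def mean_ipr(lam, phi=0.0):
    """Mean inverse participation ratio over all eigenstates of the AAH Hamiltonian."""
    onsite = lam * np.cos(2 * np.pi * beta * np.arange(L) + phi)
    H = np.diag(onsite) - t * (np.eye(L, k=1) + np.eye(L, k=-1))
    _, vecs = np.linalg.eigh(H)
    return np.mean(np.sum(np.abs(vecs) ** 4, axis=0))   # ~1/L when extended, O(1) when localized

lams = np.linspace(0.0, 4.0, 81)                 # sweep across the transition at lam = 2t
features = np.column_stack([lams, [mean_ipr(l) for l in lams]])
X = StandardScaler().fit_transform(features)

for algo in (DBSCAN(eps=0.4, min_samples=5), OPTICS(min_samples=5)):
    labels = algo.fit_predict(X)
    print(type(algo).__name__, "label counts:",
          dict(zip(*np.unique(labels, return_counts=True))))
```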
Funding: Supported in part by the Natural Science Foundation of China under Grant Nos. U2468201 and 62221001, and by the ZTE Industry-University-Institute Cooperation Funds under Grant No. IA20240420002.
Abstract: Accurate channel state information (CSI) is crucial for 6G wireless communication systems to accommodate the growing demands of mobile broadband services. In massive multiple-input multiple-output (MIMO) systems, traditional CSI feedback approaches face challenges such as performance degradation due to feedback delay and channel aging caused by user mobility. To address these issues, we propose a novel spatio-temporal predictive network (STPNet) that jointly integrates CSI feedback and prediction modules. STPNet employs stacked Inception modules to learn the spatial correlation and temporal evolution of CSI, which captures both the local and the global spatiotemporal features. In addition, the signal-to-noise ratio (SNR) adaptive module is designed to adapt flexibly to diverse feedback channel conditions. Simulation results demonstrate that STPNet outperforms existing channel prediction methods under various channel conditions.
Funding: University of Science and Technology of China (to YLL); National Natural Science Foundation of China (12126604) (to MPZ); R&D project of Pazhou Lab (Huangpu) (2023K0609) (to MPZ); Anhui Provincial Natural Science (grant number 2208085MH235) (to KJ).
Abstract: BACKGROUND: Rapid and accurate identification of high-risk patients in emergency departments (EDs) is crucial for optimizing resource allocation and improving patient outcomes. This study aimed to develop an early prediction model for identifying high-risk patients in EDs using initial vital sign measurements. METHODS: This retrospective cohort study analyzed initial vital signs from the Chinese Emergency Triage, Assessment, and Treatment (CETAT) database, which was collected between January 1st, 2020, and June 25th, 2023. The primary outcome was the identification of high-risk patients needing immediate treatment. Various machine learning methods, including a deep-learning-based multilayer perceptron (MLP) classifier, were evaluated. Model performance was assessed using the area under the receiver operating characteristic curve (AUC-ROC). AUC-ROC values were reported for three scenarios: a default case, a scenario requiring sensitivity greater than 0.8 (Scenario I), and a scenario requiring specificity greater than 0.8 (Scenario II). SHAP values were calculated to determine the importance of each predictor within the MLP model. RESULTS: A total of 38,797 patients were analyzed, of whom 18.2% were identified as high-risk. Comparative analysis of the predictive models for high-risk patients showed AUC-ROC values ranging from 0.717 to 0.738, with the MLP model outperforming logistic regression (LR), Gaussian Naive Bayes (GNB), and the National Early Warning Score (NEWS). SHAP value analysis identified coma state, peripheral capillary oxygen saturation (SpO_(2)), and systolic blood pressure as the top three predictive factors in the MLP model, with coma state contributing the most. CONCLUSION: Compared with other methods, the MLP model with initial vital signs demonstrated optimal prediction accuracy, highlighting its potential to enhance clinical decision-making in ED triage.
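The scenario analysis above can be reproduced in outline as follows: fit an MLP classifier, compute the AUC-ROC, and read operating thresholds off the ROC curve that enforce sensitivity > 0.8 (Scenario I) or specificity > 0.8 (Scenario II). The data here are synthetic stand-ins for the vital-sign features.

```python
# Sketch of the scenario evaluation: train a small MLP, report AUC-ROC, and pick
# thresholds meeting the sensitivity/specificity constraints. Data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=4000, n_features=8, weights=[0.82], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32, 16),
                                                      max_iter=500, random_state=0))
model.fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]
print("AUC-ROC:", round(roc_auc_score(y_te, prob), 3))

fpr, tpr, thresholds = roc_curve(y_te, prob)
scenario1 = thresholds[tpr > 0.8][0]            # lowest-FPR operating point with sensitivity > 0.8
scenario2 = thresholds[(1 - fpr) > 0.8][-1]     # highest-TPR operating point with specificity > 0.8
print("Scenario I threshold (sens > 0.8):", round(float(scenario1), 3))
print("Scenario II threshold (spec > 0.8):", round(float(scenario2), 3))
```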
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 42005003 and 41475094).
Abstract: Nonlinear science is a fundamental area of physics research that investigates complex dynamical systems, which are often characterized by high sensitivity and nonlinear behaviors. Numerical simulations play a pivotal role in nonlinear science, serving as a critical tool for revealing the underlying principles governing these systems. In addition, they play a crucial role in accelerating progress across various fields, such as climate modeling, weather forecasting, and fluid dynamics. However, their high computational cost limits their application in high-precision or long-duration simulations. In this study, we propose a novel data-driven approach for simulating complex physical systems, particularly turbulent phenomena. Specifically, we develop an efficient surrogate model based on the wavelet neural operator (WNO). Experimental results demonstrate that the enhanced WNO model can accurately simulate small-scale turbulent flows at lower computational cost. In simulations of complex physical fields, the improved WNO model outperforms established deep learning models, such as U-Net, ResNet, and the Fourier neural operator (FNO), in terms of accuracy. Notably, the improved WNO model exhibits exceptional generalization capabilities, maintaining stable performance across a wide range of initial conditions and high-resolution scenarios without retraining. This study highlights the significant potential of the enhanced WNO model for simulating complex physical systems, providing strong evidence to support the development of more efficient, scalable, and high-precision simulation techniques.
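As a flavor of the neural-operator family discussed above, the sketch below implements a minimal 1D spectral-convolution layer in the spirit of the FNO baseline mentioned in the abstract; the WNO replaces the Fourier basis with wavelets and is not reproduced here. Channel counts and mode truncation are illustrative assumptions.

```python
# A neural-operator building block: a minimal 1D Fourier spectral layer (FNO-style).
# The layer mixes channels on the lowest Fourier modes and transforms back.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Mix channels on the lowest `modes` Fourier modes, then invert the transform."""
    def __init__(self, channels=32, modes=16):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(scale * torch.randn(channels, channels, modes,
                                                       dtype=torch.cfloat))

    def forward(self, x):                      # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)               # (batch, channels, grid//2 + 1)
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum("bim,iom->bom",
                                                 x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))

layer = SpectralConv1d()
u = torch.randn(4, 32, 128)                    # a batch of 1D fields lifted to 32 channels
print(layer(u).shape)                          # torch.Size([4, 32, 128])
```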
Abstract: Over-the-air computation (AirComp) enables federated learning (FL) to rapidly aggregate local models at the central server by using the waveform superposition property of the wireless channel. In this paper, a robust transmission scheme for an AirComp-based FL system with imperfect channel state information (CSI) is proposed. To model CSI uncertainty, an expectation-based error model is utilized. The main objective is to maximize the number of selected devices that meet the mean-squared error (MSE) requirements for model broadcast and model aggregation. The problem is formulated as a combinatorial optimization problem and is solved in two steps. First, the priority order of devices is determined by a sparsity-inducing procedure. Then, a feasibility detection scheme is used to select the maximum number of devices while guaranteeing that the MSE requirements are met. An alternating optimization (AO) scheme is used to transform the resulting nonconvex problem into two convex subproblems. Numerical results illustrate the effectiveness and robustness of the proposed scheme.
Funding: National Research Foundation of Korea (NRF) grant (No. 2016R1A3B1908249) funded by the Korean government.
Abstract: Organic solar cells (OSCs) hold great potential as a photovoltaic technology for practical applications. However, the traditional experimental trial-and-error method for designing and engineering OSCs can be complex, expensive, and time-consuming. Machine learning (ML) techniques enable the proficient extraction of information from datasets, allowing the development of realistic models that are capable of predicting the efficacy of materials with commendable accuracy. The PM6 donor has great potential for high-performance OSCs; however, rational design of a ternary blend requires accurately forecasting the power conversion efficiency (PCE) of ternary OSCs (TOSCs) based on a PM6 donor. Accordingly, we collected the device parameters of PM6-based TOSCs and evaluated the feature importance of their molecular descriptors to develop predictive models. In this study, we used five different ML algorithms for analysis and prediction. For the analysis, the classification and regression tree provided different rules, heuristics, and patterns from the heterogeneous dataset. The random forest algorithm outperformed the other ML algorithms in predicting the output performance of PM6-based TOSCs. Finally, we validated the ML outcomes by fabricating PM6-based TOSCs. Our study presents a rapid strategy for assessing a high PCE while elucidating the substantial influence of diverse descriptors.
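A minimal sketch of the model-comparison step: several regressors, including a classification-and-regression tree and a random forest, are scored by cross-validation on placeholder descriptors for predicting PCE. Feature names and data are illustrative assumptions, not the collected device dataset.

```python
# Sketch: compare several regressors on placeholder device/molecular descriptors of
# PM6-based ternary blends and report cross-validated errors for predicting PCE.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
X = rng.random((200, 6))                       # e.g., energy-level offsets, weight ratios, thicknesses
pce = 12 + 4 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.3, 200)   # synthetic PCE (%)

models = {
    "CART": DecisionTreeRegressor(random_state=0),
    "Random forest": RandomForestRegressor(n_estimators=300, random_state=0),
    "Gradient boosting": GradientBoostingRegressor(random_state=0),
    "Linear regression": LinearRegression(),
}
for name, model in models.items():
    mae = -cross_val_score(model, X, pce, cv=5, scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {mae:.2f} % PCE")
```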
Funding: Supported by the Ministry of Science and ICT (MSIT) of the Republic of Korea (00302646) and by the National Research Foundation of Korea grant funded by the Korean Government (MSIT) (NRF-2022R1A4A1019296, 1345374646, 2022M3J1A1064315).
Abstract: Perovskite solar cells (PSCs) have developed rapidly, positioning them as potential candidates for next-generation renewable energy sources. However, conventional trial-and-error approaches and the vast compositional parameter space continue to pose challenges in the pursuit of exceptional performance and high stability in perovskite-based optoelectronics. The increasing demand for novel materials in optoelectronic devices and the establishment of substantial databases have enabled data-driven machine-learning (ML) approaches to advance swiftly in the materials field. This review succinctly outlines fundamental ML procedures, techniques, and recent breakthroughs, particularly in predicting the physical characteristics of perovskite materials. Moreover, it highlights research endeavors aimed at optimizing and screening materials to enhance the efficiency and stability of PSCs. Additionally, this review highlights recent efforts to use characterization data for ML and to explore their correlations with material properties and device performance, topics that are actively being researched but have yet to receive significant attention. Lastly, we provide future perspectives, such as leveraging large language models (LLMs) and text mining, to expedite the discovery of novel perovskite materials and expand their utilization across various optoelectronic fields.
Funding: Supported by the National Natural Science Foundation of China (No. 52207229), the Key Research and Development Program of Ningxia Hui Autonomous Region of China (No. 2024BEE02003), the AEGiS Research Grant 2024, University of Wollongong (No. R6254), and the China Scholarship Council (No. 202207550010).
Abstract: Accurate prediction of the remaining useful life (RUL) is crucial for the design and management of lithium-ion batteries. Although various machine learning models offer promising predictions, one critical but often overlooked challenge is their demand for considerable run-to-failure data for training. Collecting such training data requires prohibitive testing effort, as run-to-failure tests can last for years. Here, we propose a semi-supervised representation learning method that enhances prediction accuracy by learning from data without RUL labels. Our approach builds on a sophisticated deep neural network that comprises an encoder and three decoder heads to extract time-dependent representation features from short-term battery operating data, regardless of the existence of RUL labels. The approach is validated using three datasets collected from 34 batteries operating under various conditions, encompassing over 19,900 charge and discharge cycles. Our method achieves a root mean squared error (RMSE) within 25 cycles even when only 1/50 of the training dataset is labelled, representing a reduction of 48% compared with the conventional approach. We also demonstrate the method's robustness with varying amounts of labelled data and different weights assigned to the three decoder heads. The projection of the extracted features into a low-dimensional space reveals that our method effectively learns degradation features from unlabelled data. Our approach highlights the promise of utilising semi-supervised learning to reduce the data demand for reliability monitoring of energy devices.
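The shared-encoder, three-decoder-head idea can be sketched as follows: two heads learn self-supervised targets (signal reconstruction and an auxiliary regression are used here as stand-ins), while the RUL head's loss is masked out for unlabelled samples. The head choices and loss weights are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of a shared encoder with three decoder heads; the RUL loss is
# masked for unlabelled samples so the model still learns from them.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadRUL(nn.Module):
    def __init__(self, in_dim=24, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.recon_head = nn.Linear(hidden, in_dim)   # reconstructs the input features
        self.aux_head = nn.Linear(hidden, 1)          # auxiliary target (e.g., capacity)
        self.rul_head = nn.Linear(hidden, 1)          # RUL, supervised only when labelled
    def forward(self, x):
        h = self.encoder(x)
        return self.recon_head(h), self.aux_head(h), self.rul_head(h)

def loss_fn(model, x, aux, rul, has_label, w=(1.0, 1.0, 1.0)):
    recon, aux_pred, rul_pred = model(x)
    l_recon = F.mse_loss(recon, x)
    l_aux = F.mse_loss(aux_pred.squeeze(-1), aux)
    mask = has_label.float()
    l_rul = (mask * (rul_pred.squeeze(-1) - rul) ** 2).sum() / mask.sum().clamp(min=1.0)
    return w[0] * l_recon + w[1] * l_aux + w[2] * l_rul

# Toy batch: short-term features for 64 cells; only 1/8 of them carry RUL labels.
x, aux = torch.randn(64, 24), torch.randn(64)
rul = torch.rand(64) * 2000
has_label = torch.arange(64) % 8 == 0
model = MultiHeadRUL()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    loss = loss_fn(model, x, aux, rul, has_label)
    opt.zero_grad(); loss.backward(); opt.step()
print("final combined loss:", float(loss))
```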
Funding: Supported by the National Natural Science Foundation of China (Grant No. 12074115) and the Science and Technology Innovation Program of Hunan Province (Grant No. 2023RC3176).
Abstract: LiFePO_(4) is a cathode material with good thermal stability, but its low thermal conductivity is a critical problem. In this study, we employ a machine learning potential approach based on first-principles methods, combined with the Boltzmann transport theory, to investigate the influence of Na substitution on the thermal conductivity of LiFePO_(4) and the impact of Li-ion de-intercalation on the thermal conductivity of Li_(3/4)Na_(1/4)FePO_(4), with the aim of enhancing heat dissipation in Li-ion batteries. The results show a significant increase in thermal conductivity due to an increase in phonon group velocity and a decrease in phonon anharmonic scattering upon Na substitution. In addition, the thermal conductivity increases significantly with decreasing Li-ion concentration due to the increase in phonon lifetime. Our work guides the improvement of the thermal conductivity of LiFePO_(4), emphasizing the crucial roles of both substitution and Li-ion de-intercalation/intercalation in the thermal management of electrochemical energy storage devices.
Funding: Supported by the National Natural Science Foundation of China (No. 12005198).
Abstract: In this study, an end-to-end deep learning method is proposed to improve the accuracy of continuum estimation in low-resolution gamma-ray spectra. A novel process for generating the theoretical continuum of a simulated spectrum is established, and a convolutional neural network consisting of 51 layers and more than 10^(5) parameters is constructed to directly predict the entire continuum from the extracted global spectrum features. For testing, an in-house NaI-type whole-body counter is used, and 10^(6) training spectrum samples (20% of which are reserved for testing) are generated using Monte Carlo simulations. In addition, the existing fitting, step-type, and peak erosion methods are selected for comparison. The proposed method exhibits excellent performance, as evidenced by its activity error distribution and the smallest mean activity error of 1.5% among the evaluated methods. Additionally, a validation experiment is performed using a whole-body counter to analyze a human physical phantom containing four radionuclides. The largest activity error of the proposed method is −5.1%, which is considerably smaller than those of the comparative methods, confirming the test results. The multiscale feature extraction and nonlinear relation modeling in the proposed method establish a novel approach for accurate and convenient continuum estimation in low-resolution gamma-ray spectra. Thus, the proposed method is promising for accurate quantitative radioactivity analysis in practical applications.
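A toy version of the spectrum-to-continuum mapping: a small 1D CNN regresses the continuum bin-by-bin from a binned spectrum. The depth, channel widths, and 1024-bin spectrum length are placeholder assumptions, far smaller than the 51-layer network described above.

```python
# Illustrative sketch: a compact 1D CNN mapping a binned gamma-ray spectrum to its
# continuum. Data and targets are random placeholders.
import torch
import torch.nn as nn

continuum_net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(32, 16, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=1),
    nn.Softplus(),                 # continuum counts are non-negative
)

spectra = torch.rand(8, 1, 1024)           # batch of simulated spectra (counts per bin)
targets = torch.rand(8, 1, 1024)           # their theoretical continua (placeholders)
opt = torch.optim.Adam(continuum_net.parameters(), lr=1e-3)
for _ in range(200):
    loss = nn.functional.mse_loss(continuum_net(spectra), targets)
    opt.zero_grad(); loss.backward(); opt.step()
print("training MSE:", float(loss))
```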
Abstract: Hydrogen generation and related energy applications rely heavily on the hydrogen evolution reaction (HER), which faces the challenges of slow kinetics and high overpotential. Efficient electrocatalysts, particularly single-atom catalysts (SACs) on two-dimensional (2D) materials, are essential. This study presents a few-shot machine learning (ML)-assisted high-throughput screening of SACs supported on 2D septuple-atomic-layer Ga_(2)CoS_(4-x) to predict HER catalytic activity. Initially, density functional theory (DFT) calculations showed that 2D Ga_(2)CoS_(4) is inactive for HER. However, defective Ga_(2)CoS_(4-x) (x=0–0.25) monolayers exhibit excellent HER activity due to surface sulfur vacancies (SVs), with predicted overpotentials (0–60 mV) comparable to or lower than that of commercial Pt/C, which typically exhibits an overpotential of around 50 mV in acidic electrolyte, when the concentration of surface SVs is lower than 8.3%. SVs generate spin-polarized states near the Fermi level, making them effective HER sites. We demonstrate ML-accelerated HER overpotential predictions for all transition-metal SACs on 2D Ga_(2)CoS_(4-x). Using DFT data from 18 SACs, an ML model with high prediction accuracy and reduced computation time was developed. An intrinsic descriptor linking SAC atomic properties to HER overpotential was identified. This study thus provides a framework for screening SACs on 2D materials, enhancing catalyst design.
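The few-shot regression step can be sketched as follows: with DFT overpotentials available for only 18 SACs, a Gaussian process is fitted on simple atomic-property descriptors and used to predict the remaining candidates with uncertainty estimates. The descriptors and values are synthetic placeholders, not the paper's dataset or its intrinsic descriptor.

```python
# Sketch of few-shot regression: fit a Gaussian process on 18 DFT samples of
# placeholder atomic-property descriptors and predict unseen SAC overpotentials.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Columns: electronegativity, d-electron count, atomic radius (placeholder values)
X_train = rng.random((18, 3))
eta_train = 0.05 + 0.3 * X_train[:, 0] - 0.1 * X_train[:, 1] + rng.normal(0, 0.01, 18)  # overpotential (V)

scaler = StandardScaler().fit(X_train)
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-4),
                               normalize_y=True, random_state=0)
gpr.fit(scaler.transform(X_train), eta_train)

X_new = rng.random((10, 3))                    # remaining candidate SACs
eta_pred, eta_std = gpr.predict(scaler.transform(X_new), return_std=True)
for eta, std in zip(eta_pred, eta_std):
    print(f"predicted overpotential: {eta:.3f} +/- {std:.3f} V")
```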
Funding: Supported by the Science and Technology Project of State Grid Sichuan Electric Power Company, Chengdu Power Supply Company, under Grant No. 521904240005.
Abstract: This paper presents a novel approach to dynamic pricing and distributed energy management in virtual power plant (VPP) networks using multi-agent reinforcement learning (MARL). As the energy landscape evolves towards greater decentralization and renewable integration, traditional optimization methods struggle to address the inherent complexities and uncertainties. Our proposed MARL framework enables adaptive, decentralized decision-making for both the distribution system operator and individual VPPs, optimizing economic efficiency while maintaining grid stability. We formulate the problem as a Markov decision process and develop a custom MARL algorithm that leverages actor-critic architectures and experience replay. Extensive simulations across diverse scenarios demonstrate that our approach consistently outperforms baseline methods, including Stackelberg game models and model predictive control, achieving an 18.73% reduction in costs and a 22.46% increase in VPP profits. The MARL framework shows particular strength in scenarios with high renewable energy penetration, where it improves system performance by 11.95% compared with traditional methods. Furthermore, our approach demonstrates superior adaptability to unexpected events and mis-predictions, highlighting its potential for real-world implementation.
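The building blocks mentioned above, actor-critic networks plus an experience replay buffer, are sketched below for a single agent with continuous actions (for example, price and dispatch set-points). Dimensions and network sizes are illustrative assumptions; the multi-agent coordination and the paper's reward design are not reproduced.

```python
# Actor-critic networks and an experience replay buffer for one agent; a single
# bootstrapped TD target is computed as an example of the critic update signal.
import random
from collections import deque

import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, obs_dim=10, act_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim), nn.Tanh())  # actions scaled to [-1, 1]
    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    def __init__(self, obs_dim=10, act_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

class ReplayBuffer:
    """Fixed-size experience replay; the oldest transitions are evicted automatically."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)
    def push(self, obs, act, reward, next_obs):
        self.buffer.append((obs, act, reward, next_obs))
    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        obs, act, rew, nxt = map(torch.stack, zip(*batch))
        return obs, act, rew, nxt

actor, critic, buffer = Actor(), Critic(), ReplayBuffer()
for _ in range(256):                                # fill the buffer with random transitions
    o, o2 = torch.randn(10), torch.randn(10)
    a = actor(o).detach()
    buffer.push(o, a, torch.randn(1), o2)
obs, act, rew, nxt = buffer.sample(64)
td_target = rew + 0.99 * critic(nxt, actor(nxt)).detach()   # bootstrapped critic target
print("critic TD error:", float((critic(obs, act) - td_target).pow(2).mean()))
```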
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62325107, 62341107, 62261160650, and U23A20272, and in part by the Beijing Natural Science Foundation under Grant L222002.
Abstract: In this paper, we propose a sub-6 GHz channel-assisted hybrid beamforming (HBF) scheme for mmWave systems under both line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios without mmWave channel estimation. Meanwhile, we resort to a self-supervised approach to eliminate the need for labels, thus avoiding the accompanying high cost of data collection and annotation. We first construct a dense connection network (DCnet) with three modules: a feature extraction module for extracting channel characteristics from a large amount of channel data, a feature fusion module for combining multidimensional features, and a prediction module for generating the HBF matrices. Next, we establish a lightweight network architecture, named LDnet, to reduce the number of model parameters and the computational complexity. Compared with methods that use mmWave channel information directly, the proposed sub-6 GHz assisted approach eliminates mmWave pilot resources. The simulation results indicate that the proposed DCnet and LDnet achieve spectral efficiency superior to that of the traditional orthogonal matching pursuit (OMP) algorithm by 13.66% and 10.44% under LOS scenarios and by 32.35% and 27.75% under NLOS scenarios, respectively. Moreover, LDnet achieves a 98.52% reduction in the number of model parameters and a 22.93% reduction in computational complexity compared with DCnet.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 12222506, 12347102, 12447164, and 12174184).
Abstract: Accurately estimating protein–ligand binding free energy is crucial for drug design and biophysics, yet it remains a challenging task. In this study, we applied the screening molecular mechanics/Poisson–Boltzmann surface area (MM/PBSA) method in combination with various machine learning techniques to compute the binding free energies of protein–ligand interactions. Our results demonstrate that machine learning outperforms direct screening MM/PBSA calculations in predicting protein–ligand binding free energies. Notably, the random forest (RF) method exhibited the best predictive performance, with a Pearson correlation coefficient (r_(p)) of 0.702 and a mean absolute error (MAE) of 1.379 kcal/mol. Furthermore, we analyzed feature importance rankings in the gradient boosting (GB), adaptive boosting (AdaBoost), and RF methods, and found that feature selection significantly impacted predictive performance. In particular, molecular weight (MW) and van der Waals (VDW) energies played a decisive role in the prediction. Overall, this study highlights the potential of combining machine learning methods with screening MM/PBSA for accurately predicting binding free energies in biosystems.
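A minimal sketch of the evaluation reported above: a random forest is trained on placeholder MM/PBSA energy terms and ligand properties, then scored with the Pearson correlation coefficient and MAE on held-out complexes, with feature importances ranked afterwards. The feature set and data are synthetic stand-ins for the real descriptors.

```python
# Sketch: random forest regression of binding free energies with Pearson r and MAE
# evaluation plus a feature-importance ranking. Data are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
features = ["VDW", "electrostatic", "polar_solvation", "SASA", "MW"]
X = rng.normal(size=(400, len(features)))
dg = -8 + 1.5 * X[:, 0] + 0.8 * X[:, 4] + rng.normal(0, 1.0, 400)   # binding free energy (kcal/mol)

X_tr, X_te, y_tr, y_te = train_test_split(X, dg, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

r_p, _ = pearsonr(y_te, pred)
print(f"Pearson r_p = {r_p:.3f}, MAE = {mean_absolute_error(y_te, pred):.3f} kcal/mol")
for name, imp in sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: importance {imp:.3f}")
```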