Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Using signals transmitted wirelessly by a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only a few labeled samples are sufficient for fine-tuning the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% in different scenarios, including the prediction of eight-direction commands and air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without modifying the model structure or undergoing extensive task-specific training. Its utility has been further extended to enhance human–machine interaction on digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communicating.
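The abstract does not specify its contrastive objective; a standard choice for self-supervised pretraining on unlabeled signal windows is the NT-Xent (normalized temperature-scaled cross-entropy) loss, sketched below in plain Python. All function names are illustrative, and `z1[i]`/`z2[i]` stand for embeddings of two augmented views of the same wrist-movement window.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over a batch of embedding pairs: each embedding must
    identify its augmented partner among all other embeddings in the batch."""
    n = len(z1)
    z = z1 + z2  # 2n embeddings; partner of index i is (i + n) mod 2n
    loss = 0.0
    for i in range(2 * n):
        j = (i + n) % (2 * n)
        pos = math.exp(cosine(z[i], z[j]) / temperature)
        denom = sum(math.exp(cosine(z[i], z[k]) / temperature)
                    for k in range(2 * n) if k != i)
        loss += -math.log(pos / denom)
    return loss / (2 * n)
```

Minimizing this loss pulls the two views of the same window together and pushes apart views of different windows, which is what lets the model learn from unlabeled random wrist movements before the few-shot fine-tuning stage.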
The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst-design platform. High-throughput screening of catalytic performance is feasible because large MOF structure databases are available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO2 cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, leading to a high accuracy of up to 97% with the 75% quantile of the training set as the classification criterion. Feature contributions were further evaluated with SHAP and PDP analyses to provide physical insight. Using the model, 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 °C and 1 bar within one day, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top experimental performance among the reported MOFs, in good agreement with the prediction.
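The "75% quantile as classification criterion" step turns a continuous catalytic-performance target into binary labels before training a classifier. A minimal sketch of that labeling step (function names hypothetical, quantile convention assumed to be linear interpolation):

```python
def quantile(values, q):
    """Linear-interpolated quantile (numpy's default convention)."""
    s = sorted(values)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    frac = pos - lo
    return s[lo] * (1 - frac) + s[hi] * frac

def label_by_quantile(performance, q=0.75):
    """Binarize a continuous target: 1 = above the q-quantile of the
    training set ('efficient catalyst'), 0 = otherwise."""
    cut = quantile(performance, q)
    return [1 if y > cut else 0 for y in performance], cut
```

With the cut fixed on the training set, the same threshold is applied to held-out structures, so the classifier's 97% accuracy refers to recovering this above/below-quantile label.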
The burgeoning market for lithium-ion batteries has stimulated a growing need for more reliable battery performance monitoring. Accurate state-of-health (SOH) estimation is critical for ensuring battery operational performance. Despite the numerous data-driven methods reported for battery SOH estimation, these methods often exhibit inconsistent performance across application scenarios. To address this issue and overcome the performance limitations of individual data-driven models, integrating multiple models for SOH estimation has received considerable attention. Ensemble learning (EL) typically leverages the strengths of multiple base models to achieve more robust and accurate outputs. However, the lack of a clear review of current research hinders the further development of ensemble methods for SOH estimation. This paper therefore comprehensively reviews multi-model ensemble learning methods for battery SOH estimation. First, existing ensemble methods are systematically categorized into six classes based on their combination strategies. The different realizations and underlying connections of each category of EL methods are meticulously analyzed, highlighting distinctions, innovations, and typical applications. These ensemble methods are then comprehensively compared in terms of base models, combination strategies, and publication trends; evaluations across six dimensions underscore the outstanding performance of stacking-based ensemble methods. The ensemble methods are further inspected from the perspectives of weighted ensembles and diversity, aiming to inspire potential approaches for enhancing ensemble performance.
Moreover, challenges such as base model selection, measuring model robustness and uncertainty, and the interpretability of ensemble models in practical applications are emphasized. Finally, future research prospects are outlined, noting in particular that deep learning ensembles are poised to advance ensemble methods for battery SOH estimation. The convergence of advanced machine learning with ensemble learning is anticipated to open valuable avenues for research, and accelerated research in ensemble learning holds promising prospects for more accurate and reliable battery SOH estimation under real-world conditions.
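One of the weighted-ensemble combination strategies the review discusses can be sketched concretely: weight each base SOH estimator inversely to its validation error, then combine their predictions. This is a generic heuristic, not a method attributed to any specific paper in the review.

```python
def inverse_error_weights(val_errors):
    """Weight each base model inversely to its validation RMSE;
    weights are normalized to sum to 1."""
    inv = [1.0 / e for e in val_errors]
    total = sum(inv)
    return [w / total for w in inv]

def ensemble_predict(base_preds, weights):
    """Combine per-model SOH predictions (one list of predictions per
    base model) into a single weighted-average estimate per sample."""
    return [sum(w * p[i] for w, p in zip(weights, base_preds))
            for i in range(len(base_preds[0]))]
```

Stacking-based methods, which the review finds strongest, replace the fixed weighting rule with a trained meta-model that learns the combination from base-model outputs.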
Classification of quantum phases is one of the most important areas of research in condensed matter physics. In this work, we obtain the phase diagram of one-dimensional quasiperiodic models via unsupervised learning. First, we apply two advanced unsupervised learning algorithms, density-based spatial clustering of applications with noise (DBSCAN) and ordering points to identify the clustering structure (OPTICS), to explore the distinct phases of the Aubry–André–Harper model and the quasiperiodic p-wave model. The unsupervised learning results match well with those obtained through traditional numerical diagonalization. Finally, we assess the similarity across different algorithms and find that the highest degree of similarity between the results of the unsupervised learning algorithms and those of traditional algorithms exceeds 98%. Our work sheds light on applications of unsupervised learning to phase classification.
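For readers unfamiliar with the first of the two algorithms, a textbook DBSCAN sketch in plain Python is given below (this is the generic algorithm, not the paper's implementation; in the paper the "points" would be feature vectors derived from eigenstates, e.g. inverse participation ratios).

```python
def region_query(points, i, eps):
    # Indices of all points within eps of points[i] (including i itself).
    return [j for j, q in enumerate(points)
            if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point, -1 = noise."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbors = region_query(points, i, eps)
        if len(neighbors) < min_pts:
            labels[i] = -1  # noise, unless a cluster claims it later
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(neighbors)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: joined, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_neighbors = region_query(points, j, eps)
            if len(j_neighbors) >= min_pts:
                seeds.extend(j_neighbors)  # core point: keep expanding
    return labels
```

OPTICS generalizes this by ordering points by reachability distance instead of fixing a single `eps`, which makes it less sensitive to density variations between phases.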
Accurate channel state information (CSI) is crucial for 6G wireless communication systems to accommodate the growing demands of mobile broadband services. In massive multiple-input multiple-output (MIMO) systems, traditional CSI feedback approaches face challenges such as performance degradation due to feedback delay and channel aging caused by user mobility. To address these issues, we propose a novel spatio-temporal predictive network (STPNet) that jointly integrates CSI feedback and prediction modules. STPNet employs stacked Inception modules to learn the spatial correlation and temporal evolution of CSI, capturing both local and global spatio-temporal features. In addition, a signal-to-noise ratio (SNR) adaptive module is designed to adapt flexibly to diverse feedback channel conditions. Simulation results demonstrate that STPNet outperforms existing channel prediction methods under various channel conditions.
Nonlinear science is a fundamental area of physics research that investigates complex dynamical systems, which are often characterized by high sensitivity and nonlinear behaviors. Numerical simulations play a pivotal role in nonlinear science, serving as a critical tool for revealing the underlying principles governing these systems; they also accelerate progress across fields such as climate modeling, weather forecasting, and fluid dynamics. However, their high computational cost limits their application in high-precision or long-duration simulations. In this study, we propose a novel data-driven approach for simulating complex physical systems, particularly turbulent phenomena. Specifically, we develop an efficient surrogate model based on the wavelet neural operator (WNO). Experimental results demonstrate that the enhanced WNO model can accurately simulate small-scale turbulent flows at lower computational cost. In simulations of complex physical fields, the improved WNO model outperforms established deep learning models, such as U-Net, ResNet, and the Fourier neural operator (FNO), in terms of accuracy. Notably, the improved WNO model exhibits exceptional generalization, maintaining stable performance across a wide range of initial conditions and high-resolution scenarios without retraining. This study highlights the significant potential of the enhanced WNO model for simulating complex physical systems, providing strong evidence to support the development of more efficient, scalable, and high-precision simulation techniques.
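The wavelet neural operator learns its kernel in a multiscale wavelet basis rather than in physical space. The simplest example of such a decomposition is one level of the (unnormalized) Haar transform, sketched below; the WNO itself uses learned weights on these multiscale coefficients, which is not shown here.

```python
def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages
    (coarse approximation) and pairwise half-differences (detail).
    Assumes an even-length signal."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_step: each pair is (a + d, a - d)."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out
```

Recursing `haar_step` on the approximation yields the multiresolution pyramid; operating on the coarse coefficients is what lets wavelet-based surrogates capture large-scale structure cheaply while retaining localized detail.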
BACKGROUND: Rapid and accurate identification of high-risk patients in emergency departments (EDs) is crucial for optimizing resource allocation and improving patient outcomes. This study aimed to develop an early prediction model for identifying high-risk patients in EDs using initial vital sign measurements. METHODS: This retrospective cohort study analyzed initial vital signs from the Chinese Emergency Triage, Assessment, and Treatment (CETAT) database, collected between January 1, 2020, and June 25, 2023. The primary outcome was the identification of high-risk patients needing immediate treatment. Various machine learning methods, including a deep-learning-based multilayer perceptron (MLP) classifier, were evaluated. Model performance was assessed using the area under the receiver operating characteristic curve (AUC-ROC). AUC-ROC values were reported for three scenarios: a default case, a scenario requiring sensitivity greater than 0.8 (Scenario I), and a scenario requiring specificity greater than 0.8 (Scenario II). SHAP values were calculated to determine the importance of each predictor within the MLP model. RESULTS: A total of 38,797 patients were analyzed, of whom 18.2% were identified as high-risk. Comparative analysis of the predictive models showed AUC-ROC values ranging from 0.717 to 0.738, with the MLP model outperforming logistic regression (LR), Gaussian naive Bayes (GNB), and the National Early Warning Score (NEWS). SHAP analysis identified coma state, peripheral capillary oxygen saturation (SpO2), and systolic blood pressure as the top three predictive factors in the MLP model, with coma state contributing the most. CONCLUSION: Compared with the other methods, the MLP model with initial vital signs demonstrated optimal prediction accuracy, highlighting its potential to enhance clinical decision-making in ED triage.
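The AUC-ROC metric used throughout this study has a simple rank interpretation: it is the probability that a randomly chosen high-risk patient receives a higher risk score than a randomly chosen low-risk patient. A direct (O(n²), fine for illustration) computation via the Mann-Whitney U statistic:

```python
def auc_roc(labels, scores):
    """AUC as P(score of a random positive > score of a random negative),
    with ties counted as half a win. labels are 0/1, scores are floats."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An AUC of 0.738, as reported for the MLP, therefore means the model ranks a random high-risk patient above a random low-risk one about 74% of the time; the Scenario I/II operating points then fix a threshold on the same score to guarantee sensitivity or specificity above 0.8.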
Over-the-air computation (AirComp) enables federated learning (FL) to rapidly aggregate local models at a central server by exploiting the waveform superposition property of the wireless channel. In this paper, a robust transmission scheme for an AirComp-based FL system with imperfect channel state information (CSI) is proposed. To model CSI uncertainty, an expectation-based error model is utilized. The main objective is to maximize the number of selected devices that meet the mean-squared error (MSE) requirements for model broadcast and model aggregation. The problem is formulated as a combinatorial optimization problem and solved in two steps. First, the priority order of devices is determined by a sparsity-inducing procedure. Then, a feasibility detection scheme selects the maximum number of devices while guaranteeing that the MSE requirements are met. An alternating optimization (AO) scheme transforms the resulting nonconvex problem into two convex subproblems. Numerical results illustrate the effectiveness and robustness of the proposed scheme.
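The two-step structure (priority ordering, then feasibility detection) can be sketched generically. Everything here is a stand-in: `mse_of` abstracts the paper's MSE evaluation (which in the actual scheme involves the AO solve), and the greedy walk over the priority order is one plausible realization of "select the maximum number of devices", not the paper's exact procedure.

```python
def select_devices(priority_order, mse_of, mse_max):
    """Walk devices in the (sparsity-induced) priority order and keep each
    device whose addition leaves the aggregation MSE within budget.
    mse_of(subset) -> float is a placeholder for the real MSE evaluation."""
    selected = []
    for d in priority_order:
        trial = selected + [d]
        if mse_of(trial) <= mse_max:
            selected = trial  # device d is feasible; commit it
    return selected
```

Usage with a toy MSE model in which the subset MSE is set by the weakest device's channel gain: with gains {1: 2.0, 2: 1.0, 3: 0.2} and budget 1.0, devices 1 and 2 are selected and device 3 is rejected.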
Organic solar cells (OSCs) hold great potential as a photovoltaic technology for practical applications. However, the traditional experimental trial-and-error method for designing and engineering OSCs can be complex, expensive, and time-consuming. Machine learning (ML) techniques enable the proficient extraction of information from datasets, allowing the development of realistic models capable of predicting the efficacy of materials with commendable accuracy. The PM6 donor has great potential for high-performance OSCs, but the rational design of a ternary blend requires accurate forecasting of the power conversion efficiency (PCE) of ternary OSCs (TOSCs) based on a PM6 donor. Accordingly, we collected the device parameters of PM6-based TOSCs and evaluated the feature importance of their molecular descriptors to develop predictive models. In this study, we used five different ML algorithms for analysis and prediction. For the analysis, the classification and regression tree provided different rules, heuristics, and patterns from the heterogeneous dataset. The random forest algorithm outperformed the other ML algorithms in predicting the output performance of PM6-based TOSCs. Finally, we validated the ML outcomes by fabricating PM6-based TOSCs. Our study presents a rapid strategy for assessing high PCE while elucidating the substantial influence of diverse descriptors.
In this study, an end-to-end deep learning method is proposed to improve the accuracy of continuum estimation in low-resolution gamma-ray spectra. A novel process for generating the theoretical continuum of a simulated spectrum is established, and a convolutional neural network consisting of 51 layers and more than 10^5 parameters is constructed to directly predict the entire continuum from the extracted global spectrum features. For testing, an in-house NaI-type whole-body counter is used, and 10^6 training spectrum samples (20% of which are reserved for testing) are generated using Monte Carlo simulations. The existing fitting, step-type, and peak-erosion methods are selected for comparison. The proposed method exhibits excellent performance, as evidenced by its activity-error distribution and the smallest mean activity error (1.5%) among the evaluated methods. Additionally, a validation experiment is performed using a whole-body counter to analyze a human physical phantom containing four radionuclides. The largest activity error of the proposed method is −5.1%, considerably smaller than those of the comparative methods, confirming the test results. The multiscale feature extraction and nonlinear relation modeling of the proposed method establish a novel approach for accurate and convenient continuum estimation in low-resolution gamma-ray spectra, making it promising for accurate quantitative radioactivity analysis in practical applications.
Hydrogen generation and related energy applications rely heavily on the hydrogen evolution reaction (HER), which faces the challenges of slow kinetics and high overpotential. Efficient electrocatalysts, particularly single-atom catalysts (SACs) on two-dimensional (2D) materials, are essential. This study presents a few-shot machine learning (ML) assisted high-throughput screening of SACs supported on 2D septuple-atomic-layer Ga2CoS4-x to predict HER catalytic activity. Initially, density functional theory (DFT) calculations showed that 2D Ga2CoS4 is inactive for the HER. However, defective Ga2CoS4-x (x = 0–0.25) monolayers exhibit excellent HER activity owing to surface sulfur vacancies (SVs), with predicted overpotentials (0–60 mV) comparable to or lower than that of commercial Pt/C (typically around 50 mV in acidic electrolyte) when the concentration of surface SVs is below 8.3%. SVs generate spin-polarized states near the Fermi level, making them effective HER sites. We demonstrate ML-accelerated HER overpotential predictions for all transition-metal SACs on 2D Ga2CoS4-x. Using DFT data from 18 SACs, an ML model with high prediction accuracy and reduced computation time was developed, and an intrinsic descriptor linking SAC atomic properties to HER overpotential was identified. This study thus provides a framework for screening SACs on 2D materials, enhancing catalyst design.
This paper presents a novel approach to dynamic pricing and distributed energy management in virtual power plant (VPP) networks using multi-agent reinforcement learning (MARL). As the energy landscape evolves towards greater decentralization and renewable integration, traditional optimization methods struggle to address the inherent complexities and uncertainties. Our proposed MARL framework enables adaptive, decentralized decision-making for both the distribution system operator and individual VPPs, optimizing economic efficiency while maintaining grid stability. We formulate the problem as a Markov decision process and develop a custom MARL algorithm that leverages actor-critic architectures and experience replay. Extensive simulations across diverse scenarios demonstrate that our approach consistently outperforms baseline methods, including Stackelberg game models and model predictive control, achieving an 18.73% reduction in costs and a 22.46% increase in VPP profits. The MARL framework shows particular strength in scenarios with high renewable energy penetration, where it improves system performance by 11.95% compared with traditional methods. Furthermore, our approach demonstrates superior adaptability to unexpected events and mispredictions, highlighting its potential for real-world implementation.
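The experience replay mentioned above is a standard RL component: transitions are stored in a fixed-size buffer and training batches are sampled at random, decorrelating consecutive experiences. A generic sketch (the paper's own buffer details, such as per-agent buffers or prioritization, are not specified):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size experience replay: old transitions are evicted FIFO
    once capacity is reached; batches are sampled uniformly at random."""
    def __init__(self, capacity, seed=0):
        self.buf = deque(maxlen=capacity)
        self.rng = random.Random(seed)

    def push(self, state, action, reward, next_state):
        self.buf.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform sampling without replacement within the batch.
        return self.rng.sample(list(self.buf), batch_size)

    def __len__(self):
        return len(self.buf)
```

In an actor-critic setup, each sampled batch updates the critic toward a bootstrapped value target and the actor along the critic's gradient signal; replay lets both networks reuse past interactions with the grid simulation rather than learning only from the latest episode.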
In this paper, we propose a sub-6 GHz channel-assisted hybrid beamforming (HBF) scheme for mmWave systems under both line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios, without mmWave channel estimation. We resort to a self-supervised approach to eliminate the need for labels, thereby avoiding the high cost of data collection and annotation. We first construct a dense connection network (DCnet) with three modules: a feature extraction module that extracts channel characteristics from a large amount of channel data, a feature fusion module that combines multidimensional features, and a prediction module that generates the HBF matrices. Next, we establish a lightweight architecture, named LDnet, to reduce the number of model parameters and the computational complexity. The proposed sub-6 GHz assisted approach eliminates mmWave pilot resources compared with methods that use mmWave channel information directly. Simulation results indicate that DCnet and LDnet achieve spectral efficiency superior to the traditional orthogonal matching pursuit (OMP) algorithm by 13.66% and 10.44% in LOS scenarios and by 32.35% and 27.75% in NLOS scenarios, respectively. Moreover, LDnet achieves a 98.52% reduction in the number of model parameters and a 22.93% reduction in computational complexity compared with DCnet.
Accurately estimating protein–ligand binding free energy is crucial for drug design and biophysics, yet remains a challenging task. In this study, we applied the screening molecular mechanics/Poisson–Boltzmann surface area (MM/PBSA) method in combination with various machine learning techniques to compute the binding free energies of protein–ligand interactions. Our results demonstrate that machine learning outperforms direct screening MM/PBSA calculations in predicting protein–ligand binding free energies. Notably, the random forest (RF) method exhibited the best predictive performance, with a Pearson correlation coefficient (r_p) of 0.702 and a mean absolute error (MAE) of 1.379 kcal/mol. Furthermore, we analyzed the feature importance rankings of the gradient boosting (GB), adaptive boosting (AdaBoost), and RF methods, and found that feature selection significantly affected predictive performance. In particular, molecular weight (MW) and van der Waals (VDW) energies played a decisive role in the prediction. Overall, this study highlights the potential of combining machine learning methods with screening MM/PBSA for accurately predicting binding free energies in biosystems.
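The two figures of merit quoted above are standard and easy to state exactly. A plain-Python version of both, for predicted vs. experimental binding free energies:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mae(pred, true):
    """Mean absolute error, in the units of the inputs (here kcal/mol)."""
    return sum(abs(a - b) for a, b in zip(pred, true)) / len(pred)
```

Reporting both is informative because they measure different things: r_p captures how well the model ranks ligands, while MAE captures the absolute energy offset, which matters when comparing against experimental affinities.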
Machine learning (ML) is well suited to the prediction of high-complexity, high-dimensional problems such as those encountered in terminal ballistics. We evaluate the performance of four popular ML-based regression models, extreme gradient boosting (XGBoost), artificial neural network (ANN), support vector regression (SVR), and Gaussian process regression (GP), on two common terminal ballistics problems: (a) predicting the V50 ballistic limit of monolithic metallic armour impacted by small- and medium-calibre projectiles and fragments, and (b) predicting the depth to which a projectile will penetrate a target of semi-infinite thickness. To achieve this, we utilise two datasets, each consisting of approximately 1000 samples collated from public-release sources. We demonstrate that all four model types provide similarly excellent agreement when interpolating within the training data and diverge when extrapolating outside this range. Although extrapolation is not advisable for ML-based regression models, such capability is required for applications such as lethality/survivability analysis. To circumvent this, we incorporate expert knowledge and physics-based models via enforced monotonicity, as a Gaussian prior mean, and through a modified loss function. The physics-informed models demonstrate improved performance over both classical physics-based models and the basic ML regression models, providing the ability to fit experimental data accurately when available and to revert to the physics-based model when not. The resulting models demonstrate high predictive accuracy over a very wide range of projectile types, target materials and thicknesses, and impact conditions, significantly more diverse than is achievable with any existing analytical approach. Compared with numerical analysis tools such as finite element solvers, the ML models run orders of magnitude faster. We provide general guidelines throughout for the development, application, and reporting of ML models in terminal ballistics problems.
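The "modified loss function" route to physics-informed behaviour can be illustrated with a monotonicity penalty: the abstract does not give the exact form, so the hinge-penalty version below is one plausible sketch. `model` is any callable prediction function; the inputs are assumed sorted by a feature the output must be non-decreasing in (e.g. penetration depth vs. impact velocity).

```python
def monotonic_penalty_loss(model, xs, ys, lam=10.0):
    """MSE plus a hinge penalty for every adjacent pair where the
    prediction decreases as the monotone feature increases.
    lam trades data fit against the physics constraint."""
    preds = [model(x) for x in xs]
    mse = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)
    penalty = sum(max(0.0, preds[i] - preds[i + 1])
                  for i in range(len(preds) - 1))
    return mse + lam * penalty
```

Because the penalty is zero wherever predictions already respect the physics, a trained model can fit experimental data freely inside the data range while the constraint (or, in the GP variant, the physics-based prior mean) governs extrapolation.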
The recent wave of the artificial intelligence (AI) revolution has aroused unprecedented interest in the intelligentization of human society. As an essential component bridging the physical world and digital signals, flexible sensors are evolving from single sensing elements into smarter systems capable of highly efficient acquisition, analysis, and even perception of vast, multifaceted data. While challenging from a manual perspective, the development of intelligent flexible sensing has been remarkably facilitated by rapid advances in brain-inspired AI innovations at both the algorithm (machine learning) and framework (artificial synapses) levels. This review presents recent progress in emerging AI-driven intelligent flexible sensing systems. The basic concepts of machine learning and artificial synapses are introduced, and the new enabling features induced by the fusion of AI and flexible sensing are comprehensively reviewed, significantly advancing applications such as flexible sensory systems, soft/humanoid robotics, and human activity monitoring. As two of the most profound innovations of the twenty-first century, the deep incorporation of flexible sensing and AI technology holds tremendous potential for creating a smarter world for human beings.
Funding (wearable wristband gesture recognition study): supported by the Research Grant Fund from Kwangwoon University in 2023, the National Natural Science Foundation of China under Grant 62311540155, the Taishan Scholars Project Special Funds (tsqn202312035), and the open research foundation of the State Key Laboratory of Integrated Chips and Systems.
Funding (MOF catalyst screening study): financial support from the National Key Research and Development Program of China (2021YFB3501501), the National Natural Science Foundation of China (Nos. 22225803, 22038001, 22108007, and 22278011), the Beijing Natural Science Foundation (No. Z230023), and the Beijing Science and Technology Commission (No. Z211100004321001).
Funding (battery SOH ensemble learning review): National Natural Science Foundation of China (52075420), Fundamental Research Funds for the Central Universities (xzy022023049), and National Key Research and Development Program of China (2023YFB3408600).
Funding (quantum phase classification study): supported by the Natural Science Foundation of Nanjing University of Posts and Telecommunications (Grant Nos. NY223109, NY220119, and NY221055), the China Postdoctoral Science Foundation (Grant No. 2022M721693), and the National Natural Science Foundation of China (Grant No. 12404365).
Funding: Supported in part by the Natural Science Foundation of China under Grant Nos. U2468201 and 62221001, and by the ZTE Industry-University-Institute Cooperation Funds under Grant No. IA20240420002.
Abstract: Accurate channel state information (CSI) is crucial for 6G wireless communication systems to accommodate the growing demands of mobile broadband services. In massive multiple-input multiple-output (MIMO) systems, traditional CSI feedback approaches face challenges such as performance degradation due to feedback delay and channel aging caused by user mobility. To address these issues, we propose a novel spatio-temporal predictive network (STPNet) that jointly integrates CSI feedback and prediction modules. STPNet employs stacked Inception modules to learn the spatial correlation and temporal evolution of CSI, capturing both local and global spatio-temporal features. In addition, a signal-to-noise ratio (SNR) adaptive module is designed to adapt flexibly to diverse feedback channel conditions. Simulation results demonstrate that STPNet outperforms existing channel prediction methods under various channel conditions.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 42005003 and 41475094).
Abstract: Nonlinear science is a fundamental area of physics research that investigates complex dynamical systems, which are often characterized by high sensitivity and nonlinear behaviors. Numerical simulations play a pivotal role in nonlinear science, serving as a critical tool for revealing the underlying principles governing these systems. In addition, they play a crucial role in accelerating progress across various fields, such as climate modeling, weather forecasting, and fluid dynamics. However, their high computational cost limits their application in high-precision or long-duration simulations. In this study, we propose a novel data-driven approach for simulating complex physical systems, particularly turbulent phenomena. Specifically, we develop an efficient surrogate model based on the wavelet neural operator (WNO). Experimental results demonstrate that the enhanced WNO model can accurately simulate small-scale turbulent flows at lower computational cost. In simulations of complex physical fields, the improved WNO model outperforms established deep learning models, such as U-Net, ResNet, and the Fourier neural operator (FNO), in terms of accuracy. Notably, the improved WNO model exhibits exceptional generalization capabilities, maintaining stable performance across a wide range of initial conditions and high-resolution scenarios without retraining. This study highlights the significant potential of the enhanced WNO model for simulating complex physical systems, providing strong evidence to support the development of more efficient, scalable, and high-precision simulation techniques.
Funding: University of Science and Technology of China (to YLL); National Natural Science Foundation of China (12126604) (to MPZ); R&D project of Pazhou Lab (Huangpu) (2023K0609) (to MPZ); Anhui Provincial Natural Science Foundation (Grant No. 2208085MH235) (to KJ).
Abstract: BACKGROUND: Rapid and accurate identification of high-risk patients in emergency departments (EDs) is crucial for optimizing resource allocation and improving patient outcomes. This study aimed to develop an early prediction model for identifying high-risk patients in EDs using initial vital sign measurements. METHODS: This retrospective cohort study analyzed initial vital signs from the Chinese Emergency Triage, Assessment, and Treatment (CETAT) database, collected between January 1, 2020, and June 25, 2023. The primary outcome was the identification of high-risk patients needing immediate treatment. Various machine learning methods, including a deep-learning-based multilayer perceptron (MLP) classifier, were evaluated. Model performance was assessed using the area under the receiver operating characteristic curve (AUC-ROC). AUC-ROC values were reported for three scenarios: a default case, a scenario requiring sensitivity greater than 0.8 (Scenario I), and a scenario requiring specificity greater than 0.8 (Scenario II). SHAP values were calculated to determine the importance of each predictor within the MLP model. RESULTS: A total of 38,797 patients were analyzed, of whom 18.2% were identified as high-risk. Comparative analysis of the predictive models for high-risk patients showed AUC-ROC values ranging from 0.717 to 0.738, with the MLP model outperforming logistic regression (LR), Gaussian naive Bayes (GNB), and the National Early Warning Score (NEWS). SHAP value analysis identified coma state, peripheral capillary oxygen saturation (SpO₂), and systolic blood pressure as the top three predictive factors in the MLP model, with coma state contributing the most. CONCLUSION: Compared with other methods, the MLP model with initial vital signs demonstrated optimal prediction accuracy, highlighting its potential to enhance clinical decision-making in ED triage.
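To make the evaluation pipeline concrete, here is a stdlib-only sketch that trains a logistic-regression classifier (a simpler stand-in for the paper's MLP) on synthetic vital-sign data and scores it with a hand-rolled AUC-ROC. The generative rule, feature scaling, and coefficients are all invented and are unrelated to the CETAT data.

```python
import math
import random

random.seed(1)

def sample():
    """Synthetic triage record: [SpO2, systolic BP, coma] -> high-risk label.
    Hypothetical rule: risk rises with low SpO2, low BP, or coma."""
    spo2 = random.uniform(80, 100)
    sbp = random.uniform(70, 160)
    coma = 1 if random.random() < 0.1 else 0
    logit = -0.3 * (spo2 - 95) - 0.05 * (sbp - 120) + 3.0 * coma - 2.5
    y = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    # Roughly standardized features.
    return [(spo2 - 90) / 10, (sbp - 115) / 45, float(coma)], y

data = [sample() for _ in range(2000)]
train, test = data[:1500], data[1500:]

# Logistic regression fit by plain batch gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(200):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in train:
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        for k in range(3):
            gw[k] += (p - y) * x[k]
        gb += p - y
    w = [wi - lr * g / len(train) for wi, g in zip(w, gw)]
    b -= lr * gb / len(train)

# AUC-ROC as P(score of a positive > score of a negative), pairwise.
scores = [(sum(wi * xi for wi, xi in zip(w, x)) + b, y) for x, y in test]
pos = [s for s, y in scores if y == 1]
neg = [s for s, y in scores if y == 0]
auc = sum(1.0 if p > q else 0.5 if p == q else 0.0
          for p in pos for q in neg) / (len(pos) * len(neg))
print(round(auc, 3))
```

The pairwise formulation of AUC used here is exactly the probability that a randomly chosen high-risk patient outranks a randomly chosen low-risk one, which is why AUC-ROC is a threshold-free summary of discrimination.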
Abstract: Over-the-air computation (AirComp) enables federated learning (FL) to rapidly aggregate local models at the central server by exploiting the waveform superposition property of the wireless channel. In this paper, a robust transmission scheme for an AirComp-based FL system with imperfect channel state information (CSI) is proposed. To model CSI uncertainty, an expectation-based error model is utilized. The main objective is to maximize the number of selected devices that meet the mean-squared error (MSE) requirements for model broadcast and model aggregation. The problem is formulated as a combinatorial optimization problem and is solved in two steps. First, the priority order of devices is determined by a sparsity-inducing procedure. Then, a feasibility detection scheme is used to select the maximum number of devices while guaranteeing that the MSE requirements are met. An alternating optimization (AO) scheme is used to transform the resulting nonconvex problem into two convex subproblems. Numerical results illustrate the effectiveness and robustness of the proposed scheme.
Funding: National Research Foundation of Korea (NRF) grant (No. 2016R1A3B1908249) funded by the Korean government.
Abstract: Organic solar cells (OSCs) hold great potential as a photovoltaic technology for practical applications. However, the traditional experimental trial-and-error method for designing and engineering OSCs can be complex, expensive, and time-consuming. Machine learning (ML) techniques enable the proficient extraction of information from datasets, allowing the development of realistic models capable of predicting the efficacy of materials with commendable accuracy. The PM6 donor has great potential for high-performance OSCs. However, for the rational design of a ternary blend, it is crucial to accurately forecast the power conversion efficiency (PCE) of ternary OSCs (TOSCs) based on a PM6 donor. Accordingly, we collected the device parameters of PM6-based TOSCs and evaluated the feature importance of their molecular descriptors to develop predictive models. In this study, we used five different ML algorithms for analysis and prediction. For the analysis, the classification and regression tree provided different rules, heuristics, and patterns from the heterogeneous dataset. The random forest algorithm outperformed the other ML algorithms in predicting the output performance of PM6-based TOSCs. Finally, we validated the ML outcomes by fabricating PM6-based TOSCs. Our study presents a rapid strategy for assessing high PCE while elucidating the substantial influence of diverse descriptors.
Funding: Supported by the National Natural Science Foundation of China (No. 12005198).
Abstract: In this study, an end-to-end deep learning method is proposed to improve the accuracy of continuum estimation in low-resolution gamma-ray spectra. A novel process for generating the theoretical continuum of a simulated spectrum is established, and a convolutional neural network consisting of 51 layers and more than 10⁵ parameters is constructed to directly predict the entire continuum from the extracted global spectrum features. For testing, an in-house NaI-type whole-body counter is used, and 10⁶ training spectrum samples (20% of which are reserved for testing) are generated using Monte Carlo simulations. In addition, the existing fitting, step-type, and peak-erosion methods are selected for comparison. The proposed method exhibits excellent performance, as evidenced by its activity error distribution and the smallest mean activity error of 1.5% among the evaluated methods. Additionally, a validation experiment is performed using a whole-body counter to analyze a human physical phantom containing four radionuclides. The largest activity error of the proposed method is −5.1%, which is considerably smaller than those of the comparative methods, confirming the test results. The multiscale feature extraction and nonlinear relation modeling in the proposed method establish a novel approach for accurate and convenient continuum estimation in low-resolution gamma-ray spectra. Thus, the proposed method is promising for accurate quantitative radioactivity analysis in practical applications.
Abstract: Hydrogen generation and related energy applications rely heavily on the hydrogen evolution reaction (HER), which faces the challenges of slow kinetics and high overpotential. Efficient electrocatalysts, particularly single-atom catalysts (SACs) on two-dimensional (2D) materials, are essential. This study presents a few-shot machine learning (ML)-assisted high-throughput screening of SACs supported on 2D septuple-atomic-layer Ga₂CoS₄₋ₓ to predict HER catalytic activity. Initially, density functional theory (DFT) calculations showed that 2D Ga₂CoS₄ is inactive for the HER. However, defective Ga₂CoS₄₋ₓ (x = 0–0.25) monolayers exhibit excellent HER activity owing to surface sulfur vacancies (SVs), with predicted overpotentials (0–60 mV) comparable to or lower than that of commercial Pt/C, which typically exhibits an overpotential of around 50 mV in acidic electrolyte, when the concentration of surface SVs is lower than 8.3%. SVs generate spin-polarized states near the Fermi level, making them effective HER sites. We demonstrate ML-accelerated HER overpotential predictions for all transition-metal SACs on 2D Ga₂CoS₄₋ₓ. Using DFT data from 18 SACs, an ML model with high prediction accuracy and reduced computation time was developed. An intrinsic descriptor linking SAC atomic properties to the HER overpotential was identified. This study thus provides a framework for screening SACs on 2D materials, enhancing catalyst design.
Funding: Supported by the Science and Technology Project of State Grid Sichuan Electric Power Company Chengdu Power Supply Company under Grant No. 521904240005.
Abstract: This paper presents a novel approach to dynamic pricing and distributed energy management in virtual power plant (VPP) networks using multi-agent reinforcement learning (MARL). As the energy landscape evolves towards greater decentralization and renewable integration, traditional optimization methods struggle to address the inherent complexities and uncertainties. Our proposed MARL framework enables adaptive, decentralized decision-making for both the distribution system operator and individual VPPs, optimizing economic efficiency while maintaining grid stability. We formulate the problem as a Markov decision process and develop a custom MARL algorithm that leverages actor-critic architectures and experience replay. Extensive simulations across diverse scenarios demonstrate that our approach consistently outperforms baseline methods, including Stackelberg game models and model predictive control, achieving an 18.73% reduction in costs and a 22.46% increase in VPP profits. The MARL framework shows particular strength in scenarios with high renewable energy penetration, where it improves system performance by 11.95% compared with traditional methods. Furthermore, our approach demonstrates superior adaptability to unexpected events and mispredictions, highlighting its potential for real-world implementation.
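The actor-critic update at the heart of such a framework can be sketched in miniature with a single-state pricing "bandit": an actor holds softmax preferences over two price levels, and a critic maintains a baseline value used to compute the advantage. The rewards, learning rates, and two-action setup are invented for illustration; the paper's actual MDP spans many agents, states, and an experience-replay buffer.

```python
import math
import random

random.seed(0)

def reward(action):
    """Noisy reward for a one-state toy market: action 1 (higher price)
    pays 2.0 on average, action 0 pays 1.0 (hypothetical numbers)."""
    return (2.0 if action == 1 else 1.0) + random.gauss(0, 0.1)

theta = [0.0, 0.0]   # actor: softmax preferences over the two actions
value = 0.0          # critic: running estimate of expected reward
alpha, beta = 0.1, 0.1

for _ in range(2000):
    # Sample an action from the softmax policy.
    m = max(theta)
    exps = [math.exp(t - m) for t in theta]
    probs = [e / sum(exps) for e in exps]
    a = 0 if random.random() < probs[0] else 1
    r = reward(a)
    adv = r - value                  # advantage = reward minus baseline
    value += beta * adv              # critic update (moves toward mean reward)
    for k in range(2):               # actor update: policy-gradient step
        grad = (1.0 if k == a else 0.0) - probs[k]
        theta[k] += alpha * adv * grad

print(probs[1])  # probability of the higher-reward action after training
```

The advantage term is what the critic contributes: subtracting the baseline reduces the variance of the policy-gradient estimate, which is the same mechanism scaled up in the paper's actor-critic architecture.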
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62325107, 62341107, 62261160650, and U23A20272, and in part by the Beijing Natural Science Foundation under Grant L222002.
Abstract: In this paper, we propose a sub-6 GHz channel-assisted hybrid beamforming (HBF) scheme for mmWave systems under both line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios without mmWave channel estimation. Meanwhile, we resort to a self-supervised approach to eliminate the need for labels, thus avoiding the accompanying high cost of data collection and annotation. We first construct a dense connection network (DCnet) with three modules: a feature extraction module for extracting channel characteristics from a large amount of channel data, a feature fusion module for combining multidimensional features, and a prediction module for generating the HBF matrices. Next, we establish a lightweight network architecture, named LDnet, to reduce the number of model parameters and the computational complexity. The proposed sub-6 GHz-assisted approach eliminates mmWave pilot resources compared with methods that use mmWave channel information directly. The simulation results indicate that the proposed DCnet and LDnet achieve spectral efficiency superior to the traditional orthogonal matching pursuit (OMP) algorithm by 13.66% and 10.44% under LOS scenarios and by 32.35% and 27.75% under NLOS scenarios, respectively. Moreover, LDnet achieves a 98.52% reduction in the number of model parameters and a 22.93% reduction in computational complexity compared with DCnet.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 12222506, 12347102, 12447164, and 12174184).
Abstract: Accurately estimating protein–ligand binding free energy is crucial for drug design and biophysics, yet it remains a challenging task. In this study, we applied the screening molecular mechanics/Poisson–Boltzmann surface area (MM/PBSA) method in combination with various machine learning techniques to compute the binding free energies of protein–ligand interactions. Our results demonstrate that machine learning outperforms direct screening MM/PBSA calculations in predicting protein–ligand binding free energies. Notably, the random forest (RF) method exhibited the best predictive performance, with a Pearson correlation coefficient (r_p) of 0.702 and a mean absolute error (MAE) of 1.379 kcal/mol. Furthermore, we analyzed feature importance rankings in the gradient boosting (GB), adaptive boosting (AdaBoost), and RF methods, and found that feature selection significantly impacted predictive performance. In particular, molecular weight (MW) and van der Waals (VDW) energies played a decisive role in the prediction. Overall, this study highlights the potential of combining machine learning methods with screening MM/PBSA for accurately predicting binding free energies in biosystems.
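The two reported metrics are straightforward to compute from paired predictions and reference values. The sketch below evaluates a hypothetical model with a hand-written Pearson r and MAE; the energies are synthetic (truth plus Gaussian noise), not the paper's data.

```python
import math
import random

random.seed(42)

# Hypothetical reference binding free energies (kcal/mol) and model
# predictions, mimicking a regressor whose residuals are roughly Gaussian.
truth = [random.uniform(-12, -4) for _ in range(200)]
pred = [t + random.gauss(0, 1.4) for t in truth]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r_p = pearson_r(truth, pred)
mae = sum(abs(a - b) for a, b in zip(truth, pred)) / len(truth)
print(round(r_p, 3), round(mae, 3))
```

Reporting both is informative because they answer different questions: r_p measures how well the ranking/trend is captured (scale-free), while MAE measures the typical absolute error in kcal/mol.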
Abstract: Machine learning (ML) is well suited to the prediction of high-complexity, high-dimensional problems such as those encountered in terminal ballistics. We evaluate the performance of four popular ML-based regression models, extreme gradient boosting (XGBoost), artificial neural network (ANN), support vector regression (SVR), and Gaussian process regression (GP), on two common terminal ballistics problems: (a) predicting the V50 ballistic limit of monolithic metallic armour impacted by small- and medium-calibre projectiles and fragments, and (b) predicting the depth to which a projectile will penetrate a target of semi-infinite thickness. To achieve this, we utilise two datasets, each consisting of approximately 1000 samples, collated from public-release sources. We demonstrate that all four model types provide similarly excellent agreement when interpolating within the training data and diverge when extrapolating outside this range. Although extrapolation is not advisable for ML-based regression models, for applications such as lethality/survivability analysis such a capability is required. To circumvent this, we implement expert knowledge and physics-based models via enforced monotonicity, as a Gaussian prior mean, and through a modified loss function. The physics-informed models demonstrate improved performance over both classical physics-based models and the basic ML regression models, providing an ability to accurately fit experimental data when available and to revert to the physics-based model when not. The resulting models demonstrate high levels of predictive accuracy over a very wide range of projectile types, target materials and thicknesses, and impact conditions significantly more diverse than is achievable with any existing analytical approach. Compared with numerical analysis tools such as finite element solvers, the ML models run orders of magnitude faster. We provide some general guidelines throughout for the development, application, and reporting of ML models in terminal ballistics problems.
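Of the three physics-informing mechanisms, enforced monotonicity is the easiest to illustrate in isolation: the pool-adjacent-violators algorithm (PAVA) below produces the least-squares non-decreasing fit to noisy depth-vs-velocity data. The data and units are invented; real implementations typically enforce monotonicity inside the learner itself (e.g., via monotone constraints in gradient boosting) rather than as a post-hoc fit.

```python
import random

random.seed(3)

# Hypothetical data: penetration depth grows with impact velocity, but noisy
# measurements can make an unconstrained fit locally non-monotonic.
vel = [100 + 20 * i for i in range(30)]
depth = [0.01 * v + random.gauss(0, 0.4) for v in vel]

def pava(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y."""
    blocks = [[v, 1] for v in y]                 # each block: [mean, weight]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:      # violation: merge the pair
            m = blocks[i][0] * blocks[i][1] + blocks[i + 1][0] * blocks[i + 1][1]
            w = blocks[i][1] + blocks[i + 1][1]
            blocks[i:i + 2] = [[m / w, w]]
            i = max(i - 1, 0)                    # merged block may now violate
        else:
            i += 1
    out = []
    for m, w in blocks:
        out.extend([m] * w)                      # expand blocks back to points
    return out

fit = pava(depth)
print(all(a <= b for a, b in zip(fit, fit[1:])))  # monotone by construction
```

Embedding a constraint like this encodes the physical prior (deeper penetration at higher velocity) so that the model extrapolates sensibly even where data are sparse, which is exactly the failure mode the abstract identifies for unconstrained ML regressors.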
Funding: National Natural Science Foundation of China (Nos. 52275346 and 52075287) and the Tsinghua University Initiative Scientific Research Program (20221080070).
Abstract: The recent wave of the artificial intelligence (AI) revolution has aroused unprecedented interest in the intelligentization of human society. As an essential component that bridges the physical world and digital signals, flexible sensors are evolving from single sensing elements into smarter systems capable of highly efficient acquisition, analysis, and even perception of vast, multifaceted data. While challenging from a manual perspective, the development of intelligent flexible sensing has been remarkably facilitated by rapid advances in brain-inspired AI innovations at both the algorithm (machine learning) and framework (artificial synapses) levels. This review presents the recent progress of emerging AI-driven, intelligent flexible sensing systems. The basic concepts of machine learning and artificial synapses are introduced. The new enabling features induced by the fusion of AI and flexible sensing are comprehensively reviewed, significantly advancing applications such as flexible sensory systems, soft/humanoid robotics, and human activity monitoring. As two of the most profound innovations of the twenty-first century, the deep incorporation of flexible sensing and AI technology holds tremendous potential for creating a smarter world for human beings.