Journal Articles
41,439 articles found
1. A Rapid Adaptation Approach for Dynamic Air‑Writing Recognition Using Wearable Wristbands with Self‑Supervised Contrastive Learning
Authors: Yunjian Guo, Kunpeng Li, Wei Yue, Nam‑Young Kim, Yang Li, Guozhen Shen, Jong‑Chul Lee. 《Nano-Micro Letters》 (SCIE, EI, CAS), 2025, Issue 2, pp. 417–431 (15 pages)
Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Through wireless transmission from a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only few-shot labeled data are sufficient for fine-tuning the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% in different scenarios, including the prediction of eight-direction commands and air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without the need for modifying the structure or undergoing extensive task-specific training. Its utility has been further extended to enhance human–machine interaction over digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communication.
Keywords: Wearable wristband, Self-supervised contrastive learning, Dynamic gesture, Air-writing, Human-machine interaction
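The self-supervised pretraining described in this abstract is a form of contrastive learning. As a minimal, hypothetical sketch (not the paper's implementation), the widely used InfoNCE objective scores an anchor signal against a positive view (e.g., an augmented copy of the same wrist-motion window) and several negatives:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss: the anchor should be more
    similar to its positive view than to any negative. Lower is better."""
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    return -math.log(exps[0] / sum(exps))

# The loss drops as the positive view moves closer to the anchor.
loss_close = info_nce([1.0, 0.0], [0.9, 0.1], [[0.0, 1.0], [-1.0, 0.0]])
loss_far = info_nce([1.0, 0.0], [0.0, 1.0], [[0.9, 0.1], [-1.0, 0.0]])
```

Minimizing such a loss over unlabeled data yields representations that can then be fine-tuned with only few-shot labels, as the abstract reports.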
2. High-throughput screening of CO_(2) cycloaddition MOF catalyst with an explainable machine learning model
Authors: Xuefeng Bai, Yi Li, Yabo Xie, Qiancheng Chen, Xin Zhang, Jian-Rong Li. 《Green Energy & Environment》 (SCIE, EI, CAS), 2025, Issue 1, pp. 132–138 (7 pages)
The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform. High-throughput screening of catalytic performance is feasible since large MOF structure databases are available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO_(2) cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to a high accuracy of up to 97% with the 75% quantile of the training set as the classification criterion. The feature contributions were further evaluated with SHAP and PDP analyses to provide physical insight. Using the model, 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 °C and 1 bar within one day, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top experimental performance among the reported MOFs, in good agreement with the prediction.
Keywords: Metal-organic frameworks, High-throughput screening, Machine learning, Explainable model, CO_(2) cycloaddition
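The classification criterion in this abstract, labeling catalysts by whether their performance reaches the 75% quantile of the training set, can be sketched in a few lines. This is an illustrative sketch only; the function names and the toy yield values are assumptions, not from the paper:

```python
def quantile(values, q):
    """Linear-interpolation quantile (same convention as numpy's default)."""
    s = sorted(values)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    frac = pos - lo
    return s[lo] * (1 - frac) + s[hi] * frac

def label_by_quantile(yields, q=0.75):
    """Binary labels for a screening classifier: 1 if a candidate's
    measured yield reaches the q-quantile of the training set, else 0."""
    thr = quantile(yields, q)
    return [1 if y >= thr else 0 for y in yields], thr

labels, thr = label_by_quantile([10.0, 20.0, 30.0, 40.0, 50.0], q=0.75)
```

A classifier trained on such labels can then rank thousands of hypothetical structures far faster than explicit simulation of each one.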
3. FedCLCC: A personalized federated learning algorithm for edge cloud collaboration based on contrastive learning and conditional computing
Authors: Kangning Yin, Xinhui Ji, Yan Wang, Zhiguo Wang. 《Defence Technology(防务技术)》, 2025, Issue 1, pp. 80–93 (14 pages)
Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both data volume and infrastructure challenges in edge environments. However, the diversity of clients in edge cloud computing presents significant challenges for FL. Personalized federated learning (pFL) has received considerable attention in recent years. One line of pFL work exploits both the global and local information in the local model. Current pFL algorithms suffer from limitations such as slow convergence, catastrophic forgetting, and poor performance on complex tasks, leaving significant shortcomings compared with centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the use of contrastive learning and conditional computing. Contrastive learning determines the feature representation similarity to adjust the local model. Conditional computing separates the global and local information and feeds each to its corresponding head for global and local handling. Our comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
Keywords: Federated learning, Statistical heterogeneity, Personalized model, Conditional computing, Contrastive learning
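FedCLCC's contrastive and conditional-computing heads are specific to the paper, but the aggregation step that any such FL scheme builds on is standard federated averaging: the server combines client parameter vectors weighted by local sample counts. A minimal sketch of that baseline step, with flat parameter lists as an assumed simplification:

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: aggregate client model parameters,
    weighting each client by its local sample count.
    client_weights: one flat parameter list per client."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    agg = [0.0] * n_params
    for w, size in zip(client_weights, client_sizes):
        for i, p in enumerate(w):
            agg[i] += p * size / total  # larger clients contribute more
    return agg

# Two clients: the second holds 3x the data, so it dominates the average.
agg = fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```

Personalized FL methods such as the one described then deviate from this baseline by keeping part of each client's model local.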
4. Multi-model ensemble learning for battery state-of-health estimation: Recent advances and perspectives
Authors: Chuanping Lin, Jun Xu, Delong Jiang, Jiayang Hou, Ying Liang, Zhongyue Zou, Xuesong Mei. 《Journal of Energy Chemistry》, 2025, Issue 1, pp. 739–759 (21 pages)
The burgeoning market for lithium-ion batteries has stimulated a growing need for more reliable battery performance monitoring. Accurate state-of-health (SOH) estimation is critical for ensuring battery operational performance. Despite numerous data-driven methods reported in existing research for battery SOH estimation, these methods often exhibit inconsistent performance across different application scenarios. To address this issue and overcome the performance limitations of individual data-driven models, integrating multiple models for SOH estimation has received considerable attention. Ensemble learning (EL) typically leverages the strengths of multiple base models to achieve more robust and accurate outputs. However, the lack of a clear review of current research hinders the further development of ensemble methods in SOH estimation. Therefore, this paper comprehensively reviews multi-model ensemble learning methods for battery SOH estimation. First, existing ensemble methods are systematically categorized into 6 classes based on their combination strategies. Different realizations and underlying connections are meticulously analyzed for each category of EL methods, highlighting distinctions, innovations, and typical applications. Subsequently, these ensemble methods are comprehensively compared in terms of base models, combination strategies, and publication trends. Evaluations across 6 dimensions underscore the outstanding performance of stacking-based ensemble methods. Following this, these ensemble methods are further inspected from the perspectives of weighted ensemble and diversity, aiming to inspire potential approaches for enhancing ensemble performance. Moreover, addressing challenges such as base model selection, measuring model robustness and uncertainty, and interpretability of ensemble models in practical applications is emphasized. Finally, future research prospects are outlined, specifically noting that deep learning ensemble is poised to advance ensemble methods for battery SOH estimation. The convergence of advanced machine learning with ensemble learning is anticipated to yield valuable avenues for research. Accelerated research in ensemble learning holds promising prospects for achieving more accurate and reliable battery SOH estimation under real-world conditions.
Keywords: Lithium-ion battery, State-of-health estimation, Data-driven, Machine learning, Ensemble learning, Ensemble diversity
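Among the combination strategies this review surveys, weighted ensemble is the easiest to illustrate: each base model's SOH predictions are combined with weights reflecting its validation error. The sketch below is a generic illustration under assumed data, not any specific method from the review:

```python
def inverse_error_weights(val_errors):
    """Weight each base model by the inverse of its validation error,
    normalized so the weights sum to 1."""
    inv = [1.0 / e for e in val_errors]
    s = sum(inv)
    return [w / s for w in inv]

def weighted_ensemble(predictions, weights):
    """Combine per-model SOH predictions (one list per base model)
    into a single weighted-average estimate per sample."""
    n = len(predictions[0])
    return [sum(w * preds[i] for w, preds in zip(weights, predictions))
            for i in range(n)]

# Two equally accurate base models share the weight equally.
w = inverse_error_weights([1.0, 1.0])
est = weighted_ensemble([[0.9, 0.8], [0.7, 0.6]], w)
```

Stacking, which the review's evaluations favor, replaces the fixed weights with a second-level model trained on the base models' outputs.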
5. Accurate prediction of essential proteins using ensemble machine learning
Authors: Dezhi Lu, Hao Wu, Yutong Hou, Yuncheng Wu, Yuanyuan Liu, Jinwu Wang. 《Chinese Physics B》, 2025, Issue 1, pp. 108–115 (8 pages)
Essential proteins are crucial for biological processes and can be identified through both experimental and computational methods. While experimental approaches are highly accurate, they often demand extensive time and resources. To address these challenges, we present a computational ensemble learning framework designed to identify essential proteins more efficiently. Our method begins by using node2vec to transform proteins in the protein–protein interaction (PPI) network into continuous, low-dimensional vectors. We also extract a range of features from protein sequences, including graph-theory-based, information-based, compositional, and physicochemical attributes. Additionally, we leverage deep learning techniques to analyze high-dimensional position-specific scoring matrices (PSSMs) and capture evolutionary information. We then combine these features for classification using various machine learning algorithms. To enhance performance, we integrate the outputs of these algorithms through ensemble methods such as voting, weighted averaging, and stacking. This approach effectively addresses data imbalances and improves both robustness and accuracy. Our ensemble learning framework achieves an AUC of 0.960 and an accuracy of 0.9252, outperforming other computational methods. These results demonstrate the effectiveness of our approach in accurately identifying essential proteins and highlight its superior feature extraction capabilities.
Keywords: protein-protein interaction (PPI), essential proteins, deep learning, ensemble learning
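One of the graph-theory features this abstract mentions is classically computed directly from the PPI edge list: degree centrality, long used as a baseline signal for protein essentiality. A stdlib-only sketch with a toy edge list (the protein names are placeholders):

```python
from collections import defaultdict

def degree_centrality(edges):
    """Degree centrality for each node in an undirected edge list:
    degree / (n - 1), where n is the number of nodes seen."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    return {node: len(nb) / (n - 1) for node, nb in adj.items()}

# Toy PPI network: protein A interacts with everything.
dc = degree_centrality([("A", "B"), ("A", "C"), ("B", "C"), ("A", "D")])
```

In the framework described, such hand-crafted graph features are concatenated with node2vec embeddings and sequence-derived features before classification.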
6. Database of ternary amorphous alloys based on machine learning
Authors: Xuhe Gong, Ran Li, Ruijuan Xiao, Tao Zhang, Hong Li. 《Chinese Physics B》, 2025, Issue 1, pp. 129–133 (5 pages)
The unique long-range disordered atomic arrangement inherent in amorphous materials endows them with a range of superior properties, rendering them highly promising for applications in catalysis, medicine, and battery technology, among other fields. Since not all materials can be synthesized into an amorphous structure, the composition design of amorphous materials holds significant importance. Machine learning offers a valuable alternative to traditional "trial-and-error" methods by predicting properties from experimental data, thus providing efficient guidance in material design. In this study, we develop a machine learning workflow to predict the critical casting diameter, glass transition temperature, and Young's modulus for 45 reported ternary amorphous alloy systems. The predicted results have been organized into a database, enabling direct retrieval of predicted values based on compositional information. Furthermore, applications such as screening high glass-forming-ability regions for a specified system, screening systems against multi-property targets, and iteratively searching for high glass-forming-ability regions are also demonstrated. By utilizing machine learning predictions, researchers can effectively narrow the experimental scope and expedite the exploration of compositions.
Keywords: amorphous alloys, machine learning, database
7. Classifying extended, localized and critical states in quasiperiodic lattices via unsupervised learning
Authors: Bohan Zheng, Siyu Zhu, Xingping Zhou, Tong Liu. 《Chinese Physics B》, 2025, Issue 1, pp. 422–427 (6 pages)
Classification of quantum phases is one of the most important areas of research in condensed matter physics. In this work, we obtain the phase diagram of one-dimensional quasiperiodic models via unsupervised learning. First, we choose two advanced unsupervised learning algorithms, namely density-based spatial clustering of applications with noise (DBSCAN) and ordering points to identify the clustering structure (OPTICS), to explore the distinct phases of the Aubry–André–Harper model and the quasiperiodic p-wave model. The unsupervised learning results match well with those obtained through traditional numerical diagonalization. Finally, we assess similarity across different algorithms and find that the highest degree of similarity between the results of unsupervised learning algorithms and those of traditional algorithms exceeds 98%. Our work sheds light on applications of unsupervised learning for phase classification.
Keywords: quantum phase, quasiperiodic, machine learning
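The "traditional" diagnostic that such clustering results are checked against is typically the inverse participation ratio (IPR) of the diagonalized eigenstates: it scales as ~1/N for an extended state and stays of order 1 for a localized one. A minimal sketch of this standard quantity (not the paper's specific pipeline):

```python
def ipr(state):
    """Inverse participation ratio of a wavefunction given as amplitudes:
    sum_i |psi_i|^4 after normalization. ~1/N signals an extended state,
    ~1 signals a localized state."""
    norm = sum(abs(a) ** 2 for a in state)
    return sum((abs(a) ** 2 / norm) ** 2 for a in state)

# Uniform amplitude over 10 sites: extended, IPR = 1/10.
ipr_extended = ipr([1.0] * 10)
# All weight on one site: localized, IPR = 1.
ipr_localized = ipr([1.0, 0.0, 0.0, 0.0])
```

Clustering algorithms such as DBSCAN and OPTICS can then group states by features like the IPR without any phase labels, which is the unsupervised route the abstract describes.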
8. Efficient Spatio-Temporal Predictive Learning for Massive MIMO CSI Prediction
Authors: CHENG Jiaming, CHEN Wei, LI Lun, AI Bo. 《ZTE Communications》, 2025, Issue 1, pp. 3–10 (8 pages)
Accurate channel state information (CSI) is crucial for 6G wireless communication systems to accommodate the growing demands of mobile broadband services. In massive multiple-input multiple-output (MIMO) systems, traditional CSI feedback approaches face challenges such as performance degradation due to feedback delay and channel aging caused by user mobility. To address these issues, we propose a novel spatio-temporal predictive network (STPNet) that jointly integrates CSI feedback and prediction modules. STPNet employs stacked Inception modules to learn the spatial correlation and temporal evolution of CSI, capturing both local and global spatio-temporal features. In addition, a signal-to-noise ratio (SNR) adaptive module is designed to adapt flexibly to diverse feedback channel conditions. Simulation results demonstrate that STPNet outperforms existing channel prediction methods under various channel conditions.
Keywords: massive MIMO, deep learning, CSI prediction, CSI feedback
9. Learning complex nonlinear physical systems using wavelet neural operators
Authors: Yanan Guo, Xiaoqun Cao, Hongze Leng, Junqiang Song. 《Chinese Physics B》, 2025, Issue 3, pp. 461–472 (12 pages)
Nonlinear science is a fundamental area of physics research that investigates complex dynamical systems, which are often characterized by high sensitivity and nonlinear behaviors. Numerical simulations play a pivotal role in nonlinear science, serving as a critical tool for revealing the underlying principles governing these systems. In addition, they play a crucial role in accelerating progress across various fields, such as climate modeling, weather forecasting, and fluid dynamics. However, their high computational cost limits their application in high-precision or long-duration simulations. In this study, we propose a novel data-driven approach for simulating complex physical systems, particularly turbulent phenomena. Specifically, we develop an efficient surrogate model based on the wavelet neural operator (WNO). Experimental results demonstrate that the enhanced WNO model can accurately simulate small-scale turbulent flows at lower computational cost. In simulations of complex physical fields, the improved WNO model outperforms established deep learning models, such as U-Net, ResNet, and the Fourier neural operator (FNO), in terms of accuracy. Notably, the improved WNO model exhibits exceptional generalization capabilities, maintaining stable performance across a wide range of initial conditions and high-resolution scenarios without retraining. This study highlights the significant potential of the enhanced WNO model for simulating complex physical systems, providing strong evidence to support the development of more efficient, scalable, and high-precision simulation techniques.
Keywords: nonlinear science, turbulence, deep learning, wavelet neural operator
10. Early identification of high-risk patients admitted to emergency departments using vital signs and machine learning
Authors: Qingyuan Liu, Yixin Zhang, Jian Sun, Kaipeng Wang, Yueguo Wang, Yulan Wang, Cailing Ren, Yan Wang, Jiashan Zhu, Shusheng Zhou, Mengping Zhang, Yinglei Lai, Kui Jin. 《World Journal of Emergency Medicine》, 2025, Issue 2, pp. 113–120 (8 pages)
BACKGROUND: Rapid and accurate identification of high-risk patients in emergency departments (EDs) is crucial for optimizing resource allocation and improving patient outcomes. This study aimed to develop an early prediction model for identifying high-risk patients in EDs using initial vital sign measurements. METHODS: This retrospective cohort study analyzed initial vital signs from the Chinese Emergency Triage, Assessment, and Treatment (CETAT) database, collected between January 1st, 2020, and June 25th, 2023. The primary outcome was the identification of high-risk patients needing immediate treatment. Various machine learning methods, including a deep-learning-based multilayer perceptron (MLP) classifier, were evaluated. Model performance was assessed using the area under the receiver operating characteristic curve (AUC-ROC). AUC-ROC values were reported for three scenarios: a default case, a scenario requiring sensitivity greater than 0.8 (Scenario I), and a scenario requiring specificity greater than 0.8 (Scenario II). SHAP values were calculated to determine the importance of each predictor within the MLP model. RESULTS: A total of 38,797 patients were analyzed, of whom 18.2% were identified as high-risk. Comparative analysis of the predictive models showed AUC-ROC values ranging from 0.717 to 0.738, with the MLP model outperforming logistic regression (LR), Gaussian naive Bayes (GNB), and the National Early Warning Score (NEWS). SHAP value analysis identified coma state, peripheral capillary oxygen saturation (SpO_(2)), and systolic blood pressure as the top three predictive factors in the MLP model, with coma state contributing the most. CONCLUSION: Compared with other methods, the MLP model with initial vital signs demonstrated optimal prediction accuracy, highlighting its potential to enhance clinical decision-making in ED triage.
Keywords: Machine learning, Triage, Emergency medicine, Decision support systems
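The "Scenario I" evaluation above (require sensitivity > 0.8, then see what specificity remains) amounts to picking an operating point on the ROC curve. A stdlib sketch of that threshold search, using invented toy scores and labels rather than the study's data:

```python
def pick_threshold(scores, labels, min_sensitivity=0.8):
    """Among candidate thresholds, return (threshold, specificity,
    sensitivity) maximizing specificity subject to the sensitivity
    constraint. scores: predicted risk; labels: 1 = truly high-risk."""
    best = None
    for thr in sorted(set(scores)):
        pred = [1 if s >= thr else 0 for s in scores]
        tp = sum(1 for p, l in zip(pred, labels) if p == 1 and l == 1)
        fn = sum(1 for p, l in zip(pred, labels) if p == 0 and l == 1)
        tn = sum(1 for p, l in zip(pred, labels) if p == 0 and l == 0)
        fp = sum(1 for p, l in zip(pred, labels) if p == 1 and l == 0)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        if sens >= min_sensitivity and (best is None or spec > best[1]):
            best = (thr, spec, sens)
    return best

# Toy example: three true high-risk patients, two low-risk.
best = pick_threshold([0.9, 0.8, 0.7, 0.2, 0.1], [1, 1, 0, 1, 0])
```

Reporting AUC-ROC alongside such constrained operating points, as the study does, separates overall discrimination from the clinically chosen trade-off.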
11. Robust Transmission Design for Federated Learning Through Over-the-Air Computation
Authors: Hamideh Zamanpour Abyaneh, Saba Asaad, Amir Masoud Rabiei. 《China Communications》, 2025, Issue 3, pp. 65–75 (11 pages)
Over-the-air computation (AirComp) enables federated learning (FL) to rapidly aggregate local models at the central server by exploiting the waveform superposition property of the wireless channel. In this paper, a robust transmission scheme for an AirComp-based FL system with imperfect channel state information (CSI) is proposed. To model CSI uncertainty, an expectation-based error model is utilized. The main objective is to maximize the number of selected devices that meet mean-squared error (MSE) requirements for model broadcast and model aggregation. The problem is formulated as a combinatorial optimization problem and is solved in two steps. First, the priority order of devices is determined by a sparsity-inducing procedure. Then, a feasibility detection scheme is used to select the maximum number of devices that can meet the MSE requirements. An alternating optimization (AO) scheme is used to transform the resulting nonconvex problem into two convex subproblems. Numerical results illustrate the effectiveness and robustness of the proposed scheme.
Keywords: federated learning, imperfect CSI, optimization, over-the-air computing, robust design
12. Comprehensive review of advances in machine-learning-driven optimization and characterization of perovskite materials for photovoltaic devices
Authors: Bonghyun Jo, Wenning Chen, Hyun Suk Jung. 《Journal of Energy Chemistry》, 2025, Issue 2, pp. 298–323, I0007 (27 pages)
Perovskite solar cells (PSCs) have developed rapidly, positioning them as potential candidates for next-generation renewable energy sources. However, conventional trial-and-error approaches and the vast compositional parameter space continue to pose challenges in the pursuit of exceptional performance and high stability of perovskite-based optoelectronics. The increasing demand for novel materials in optoelectronic devices and the establishment of substantial databases have enabled data-driven machine-learning (ML) approaches to advance swiftly in the materials field. This review succinctly outlines fundamental ML procedures, techniques, and recent breakthroughs, particularly in predicting the physical characteristics of perovskite materials. Moreover, it highlights research endeavors aimed at optimizing and screening materials to enhance the efficiency and stability of PSCs. Additionally, this review covers recent efforts to use characterization data for ML and to explore their correlations with material properties and device performance, topics that are actively being researched but have yet to receive significant attention. Lastly, we provide future perspectives, such as leveraging large language models (LLMs) and text mining, to expedite the discovery of novel perovskite materials and expand their utilization across various optoelectronic fields.
Keywords: Perovskite solar cell, Data-driven machine learning, Characterization, Perovskite materials
13. Machine learning empowers efficient design of ternary organic solar cells with PM6 donor
Authors: Kiran A. Nirmal, Tukaram D. Dongale, Santosh S. Sutar, Atul C. Khot, Tae Geun Kim. 《Journal of Energy Chemistry》, 2025, Issue 1, pp. 337–347 (11 pages)
Organic solar cells (OSCs) hold great potential as a photovoltaic technology for practical applications. However, the traditional experimental trial-and-error method for designing and engineering OSCs can be complex, expensive, and time-consuming. Machine learning (ML) techniques enable the proficient extraction of information from datasets, allowing the development of realistic models that are capable of predicting the efficacy of materials with commendable accuracy. The PM6 donor has great potential for high-performance OSCs. However, for the rational design of a ternary blend, it is crucial to accurately forecast the power conversion efficiency (PCE) of ternary OSCs (TOSCs) based on a PM6 donor. Accordingly, we collected the device parameters of PM6-based TOSCs and evaluated the feature importance of their molecular descriptors to develop predictive models. In this study, we used five different ML algorithms for analysis and prediction. For the analysis, the classification and regression tree provided different rules, heuristics, and patterns from the heterogeneous dataset. The random forest algorithm outperformed the other ML algorithms in predicting the output performance of PM6-based TOSCs. Finally, we validated the ML outcomes by fabricating PM6-based TOSCs. Our study presents a rapid strategy for assessing high PCE while elucidating the substantial influence of diverse descriptors.
Keywords: Machine learning, Ternary organic solar cells, PM6 donor, PCE
14. Enhanced battery life prediction with reduced data demand via semi-supervised representation learning
Authors: Liang Ma, Jinpeng Tian, Tieling Zhang, Qinghua Guo, Chi Yung Chung. 《Journal of Energy Chemistry》, 2025, Issue 2, pp. 524–534, I0011 (12 pages)
Accurate prediction of the remaining useful life (RUL) is crucial for the design and management of lithium-ion batteries. Although various machine learning models offer promising predictions, one critical but often overlooked challenge is their demand for considerable run-to-failure data for training. Collection of such training data leads to prohibitive testing effort, as run-to-failure tests can last for years. Here, we propose a semi-supervised representation learning method to enhance prediction accuracy by learning from data without RUL labels. Our approach builds on a deep neural network comprising an encoder and three decoder heads to extract time-dependent representation features from short-term battery operating data, regardless of the existence of RUL labels. The approach is validated using three datasets collected from 34 batteries operating under various conditions, encompassing over 19,900 charge and discharge cycles. Our method achieves a root mean squared error (RMSE) within 25 cycles even when only 1/50 of the training dataset is labelled, representing a reduction of 48% compared with the conventional approach. We also demonstrate the method's robustness with varying numbers of labelled data and different weights assigned to the three decoder heads. The projection of the extracted features into a low-dimensional space reveals that our method effectively learns degradation features from unlabelled data. Our approach highlights the promise of utilising semi-supervised learning to reduce the data demand for reliability monitoring of energy devices.
Keywords: Lithium-ion batteries, Battery degradation, Remaining useful life, Semi-supervised learning
15. Significant increase in thermal conductivity of cathode material LiFePO_(4) by Na substitution: A machine learning interatomic potential-assisted investigation
Authors: Shi-Yi Li, Qian Liu, Yu-Jia Zeng, Guofeng Xie, Wu-Xing Zhou. 《Chinese Physics B》, 2025, Issue 2, pp. 463–468 (6 pages)
LiFePO_(4) is a cathode material with good thermal stability, but its low thermal conductivity is a critical problem. In this study, we employ a machine learning potential approach based on first-principles methods, combined with the Boltzmann transport theory, to investigate the influence of Na substitution on the thermal conductivity of LiFePO_(4) and the impact of Li-ion de-intercalation on the thermal conductivity of Li_(3/4)Na_(1/4)FePO_(4), with the aim of enhancing heat dissipation in Li-ion batteries. The results show a significant increase in thermal conductivity due to an increase in phonon group velocity and a decrease in anharmonic phonon scattering upon Na substitution. In addition, the thermal conductivity increases significantly with decreasing Li-ion concentration due to the increase in phonon lifetime. Our work guides the improvement of the thermal conductivity of LiFePO_(4), emphasizing the crucial roles of both substitution and Li-ion de-intercalation/intercalation for the thermal management of electrochemical energy storage devices.
Keywords: lattice thermal conductivity, machine learning potential, LiFePO_(4)
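The Boltzmann-transport reasoning in this abstract (higher group velocity and longer phonon lifetime both raise thermal conductivity) follows directly from the standard relaxation-time-approximation expression, kappa = (1/V) * sum over modes of C * v^2 * tau. A sketch of that mode sum with placeholder numbers, purely to make the dependence explicit:

```python
def kappa_rta(heat_caps, group_vels, lifetimes, volume):
    """Lattice thermal conductivity under the relaxation-time
    approximation: kappa = (1/V) * sum_modes C * v^2 * tau.
    Each argument is a per-phonon-mode list (toy units)."""
    return sum(c * v * v * t for c, v, t in
               zip(heat_caps, group_vels, lifetimes)) / volume

# Two toy modes: the v^2 factor makes group velocity dominate.
kappa = kappa_rta([1.0, 2.0], [2.0, 1.0], [1.0, 1.0], 2.0)
```

In the study, the machine learning interatomic potential supplies the force constants from which the per-mode velocities and lifetimes in this sum are actually computed.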
16. Continuum estimation in low-resolution gamma-ray spectra based on deep learning
Authors: Ri Zhao, Li-Ye Liu, Xin Liu, Zhao-Xing Liu, Run-Cheng Liang, Ren-Jing Ling-Hu, Jing Zhang, Fa-Guo Chen. 《Nuclear Science and Techniques》, 2025, Issue 2, pp. 5–17 (13 pages)
In this study, an end-to-end deep learning method is proposed to improve the accuracy of continuum estimation in low-resolution gamma-ray spectra. A novel process for generating the theoretical continuum of a simulated spectrum is established, and a convolutional neural network consisting of 51 layers and more than 10^(5) parameters is constructed to directly predict the entire continuum from the extracted global spectrum features. For testing, an in-house NaI-type whole-body counter is used, and 10^(6) training spectrum samples (20% of which are reserved for testing) are generated using Monte Carlo simulations. In addition, the existing fitting, step-type, and peak erosion methods are selected for comparison. The proposed method exhibits excellent performance, as evidenced by its activity error distribution and the smallest mean activity error of 1.5% among the evaluated methods. Additionally, a validation experiment is performed using a whole-body counter to analyze a human physical phantom containing four radionuclides. The largest activity error of the proposed method is −5.1%, which is considerably smaller than those of the comparative methods, confirming the test results. The multiscale feature extraction and nonlinear relation modeling in the proposed method establish a novel approach for accurate and convenient continuum estimation in low-resolution gamma-ray spectra. Thus, the proposed method is promising for accurate quantitative radioactivity analysis in practical applications.
Keywords: Gamma-ray spectrum, Continuum estimation, Deep learning, Convolutional neural network, End-to-end prediction
17. Few-shot learning for screening 2D Ga_(2)CoS_(4-x) supported single-atom catalysts for hydrogen production
Authors: Nabil Khossossi, Poulumi Dey. 《Journal of Energy Chemistry》, 2025, Issue 1, pp. 665–673 (9 pages)
Hydrogen generation and related energy applications heavily rely on the hydrogen evolution reaction (HER), which faces challenges of slow kinetics and high overpotential. Efficient electrocatalysts, particularly single-atom catalysts (SACs) on two-dimensional (2D) materials, are essential. This study presents a few-shot machine learning (ML) assisted high-throughput screening of 2D septuple-atomic-layer Ga_(2)CoS_(4-x)-supported SACs to predict HER catalytic activity. Initially, density functional theory (DFT) calculations showed that 2D Ga_(2)CoS_(4) is inactive for HER. However, defective Ga_(2)CoS_(4-x) (x = 0–0.25) monolayers exhibit excellent HER activity due to surface sulfur vacancies (SVs), with predicted overpotentials (0–60 mV) comparable to or lower than that of commercial Pt/C, which typically exhibits an overpotential of around 50 mV in acidic electrolyte, when the concentration of surface SVs is lower than 8.3%. SVs generate spin-polarized states near the Fermi level, making them effective HER sites. We demonstrate ML-accelerated HER overpotential predictions for all transition-metal SACs on 2D Ga_(2)CoS_(4-x). Using DFT data from 18 SACs, an ML model with high prediction accuracy and reduced computation time was developed. An intrinsic descriptor linking SAC atomic properties to HER overpotential was identified. This study thus provides a framework for screening SACs on 2D materials, enhancing catalyst design.
Keywords: Hydrogen production; Electrocatalyst; 2D material; Density functional theory; Machine learning; Surface sulfur vacancy
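The "intrinsic descriptor linking SAC atomic properties to HER overpotential" described above is, at its simplest, a fitted relation between one scalar per catalyst and the target overpotential. The closed-form least-squares sketch below is illustrative only: the actual work trains an ML model on DFT data from 18 SACs, and the linear form and example values here are assumptions.

```python
def fit_descriptor_model(descriptor, overpotential):
    """Fit y = slope * x + intercept by ordinary least squares, where x
    is an atomic-property descriptor and y the HER overpotential."""
    n = len(descriptor)
    mx = sum(descriptor) / n
    my = sum(overpotential) / n
    var = sum((x - mx) ** 2 for x in descriptor)
    cov = sum((x - mx) * (y - my) for x, y in zip(descriptor, overpotential))
    slope = cov / var
    intercept = my - slope * mx
    return slope, intercept

def predict(model, x):
    """Screen a new candidate from its descriptor value alone."""
    slope, intercept = model
    return slope * x + intercept
```

Once fitted on the small DFT dataset, `predict` lets one screen the remaining candidates without running a DFT calculation per catalyst, which is the cost saving few-shot screening is after.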
A Multi-Task Learning Framework for Joint Sub-Nyquist Wideband Spectrum Sensing and Modulation Recognition
18
Authors: Dong Xin, Stefanos Bakirtzis, Zhang Jiliang, Zhang Jie. China Communications, 2025, No. 1, pp. 128–138 (11 pages)
The utilization of millimeter-wave frequencies and cognitive radio (CR) are promising ways to increase the spectral efficiency of wireless communication systems. However, conventional CR spectrum sensing techniques entail sampling the received signal at the Nyquist rate, and they are not viable for wideband signals due to their high cost. This paper expounds on how sub-Nyquist sampling in conjunction with deep learning can be leveraged to remove this limitation. To this end, we propose a multi-task learning (MTL) framework using convolutional neural networks for the joint inference of the underlying narrowband signal number, their modulation scheme, and their location in a wideband spectrum. We demonstrate the effectiveness of the proposed framework for real-world millimeter-wave wideband signals collected by physical devices, exhibiting a 91.7% accuracy in the joint inference task when considering up to two narrowband signals over a wideband spectrum. Ultimately, the proposed data-driven approach enables on-the-fly wideband spectrum sensing, combining accuracy and computational efficiency, which are indispensable for CR and opportunistic networking.
Keywords: Automated modulation classification; Cognitive radio; Convolutional neural networks; Deep learning; Spectrum sensing; Sub-Nyquist sampling
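The "joint inference" idea in this abstract — one shared feature extractor feeding separate heads for signal count and modulation scheme — can be sketched as a toy multi-task forward pass with a weighted-sum loss. Layer sizes, the tanh trunk, and the equal task weighting are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def mtl_forward(x, w_shared, w_count, w_mod):
    """Shared trunk -> two task heads: logits for the number of
    narrowband signals and logits for their modulation scheme."""
    h = np.tanh(w_shared @ x)   # shared representation of the spectrum
    count_logits = w_count @ h  # head 1: how many signals are present
    mod_logits = w_mod @ h      # head 2: which modulation scheme
    return count_logits, mod_logits

def mtl_loss(count_logits, mod_logits, y_count, y_mod, alpha=0.5):
    """Joint objective: weighted sum of the per-task cross-entropies."""
    def xent(logits, label):
        z = logits - logits.max()            # stabilized log-softmax
        logp = z - np.log(np.exp(z).sum())
        return -logp[label]
    return alpha * xent(count_logits, y_count) + (1 - alpha) * xent(mod_logits, y_mod)
```

Training both heads against the one shared trunk is what lets the sub-Nyquist features serve the detection and classification tasks at once, rather than running two separate networks.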
Combining deep reinforcement learning with heuristics to solve the traveling salesman problem
19
Authors: Li Hong, Yu Liu, Mengqiao Xu, Wenhui Deng. Chinese Physics B, 2025, No. 1, pp. 96–106 (11 pages)
Recent studies employing deep learning to solve the traveling salesman problem (TSP) have mainly focused on learning construction heuristics. Such methods can improve TSP solutions, but still depend on additional programs. However, methods that focus on learning improvement heuristics to iteratively refine solutions remain insufficient. Traditional improvement heuristics are guided by a manually designed search strategy and may only achieve limited improvements. This paper proposes a novel framework for learning improvement heuristics, which automatically discovers better improvement policies for heuristics to iteratively solve the TSP. Our framework first designs a new architecture based on a transformer model to parameterize the policy network, introducing an action-dropout layer to prevent action selection from overfitting. It then proposes a deep reinforcement learning approach integrating a simulated annealing mechanism (named RL-SA) to learn the pairwise selection policy, aiming to improve the 2-opt algorithm's performance. RL-SA leverages the whale optimization algorithm to generate initial solutions for better sampling efficiency and uses a Gaussian perturbation strategy to tackle the sparse-reward problem of reinforcement learning. The experimental results show that the proposed approach is significantly superior to state-of-the-art learning-based methods, and further reduces the gap between learning-based methods and highly optimized solvers on the benchmark datasets. Moreover, our pre-trained model M can be applied to guide the SA algorithm (named M-SA (ours)), which performs better than existing deep models on small-, medium-, and large-scale TSPLIB datasets. Additionally, M-SA (ours) achieves excellent generalization performance on a real-world dataset of global liner shipping routes, with optimization percentages in distance reduction ranging from 3.52% to 17.99%.
Keywords: Traveling salesman problem; Deep reinforcement learning; Simulated annealing algorithm; Transformer model; Whale optimization algorithm
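The improvement loop this abstract builds on — 2-opt moves accepted under a simulated-annealing rule — can be sketched in plain Python. In the paper a trained transformer policy selects which pair of edges to swap; here the pair is chosen uniformly at random, and the temperature schedule and step count are illustrative assumptions.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour under a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_sa(dist, t0=1.0, cooling=0.995, steps=2000, seed=0):
    """2-opt local search with simulated-annealing acceptance:
    downhill moves are always taken, uphill moves with probability
    exp(-delta / t), and the best tour seen so far is kept."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    best = tour[:]
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt: reverse a segment
        delta = tour_length(cand, dist) - tour_length(tour, dist)
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            tour = cand
            if tour_length(tour, dist) < tour_length(best, dist):
                best = tour[:]
        t *= cooling  # geometric cooling schedule
    return best
```

Replacing the random pair choice with a learned policy, as the paper does, is what turns this fixed heuristic into a learned improvement heuristic.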
Adaptive multi-agent reinforcement learning for dynamic pricing and distributed energy management in virtual power plant networks
20
Authors: Jian-Dong Yao, Wen-Bin Hao, Zhi-Gao Meng, Bo Xie, Jian-Hua Chen, Jia-Qi Wei. Journal of Electronic Science and Technology, 2025, No. 1, pp. 35–59 (25 pages)
This paper presents a novel approach to dynamic pricing and distributed energy management in virtual power plant (VPP) networks using multi-agent reinforcement learning (MARL). As the energy landscape evolves towards greater decentralization and renewable integration, traditional optimization methods struggle to address the inherent complexities and uncertainties. Our proposed MARL framework enables adaptive, decentralized decision-making for both the distribution system operator and individual VPPs, optimizing economic efficiency while maintaining grid stability. We formulate the problem as a Markov decision process and develop a custom MARL algorithm that leverages actor-critic architectures and experience replay. Extensive simulations across diverse scenarios demonstrate that our approach consistently outperforms baseline methods, including Stackelberg game models and model predictive control, achieving an 18.73% reduction in costs and a 22.46% increase in VPP profits. The MARL framework shows particular strength in scenarios with high renewable energy penetration, where it improves system performance by 11.95% compared with traditional methods. Furthermore, our approach demonstrates superior adaptability to unexpected events and mis-predictions, highlighting its potential for real-world implementation.
Keywords: Distributed energy management; Dynamic pricing; Multi-agent reinforcement learning; Renewable energy integration; Virtual power plants
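The experience replay this abstract's actor-critic training leverages is a standard component that can be shown in a few lines: a fixed-capacity buffer of transitions sampled uniformly to decorrelate updates. The capacity and batch size below are illustrative choices, and the sketch is generic rather than the paper's implementation.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (state, action, reward, next_state)
    transitions; old experience is evicted automatically once full."""

    def __init__(self, capacity=10000, seed=0):
        self.buffer = deque(maxlen=capacity)
        self.rng = random.Random(seed)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation between
        # consecutive transitions, stabilizing the critic's updates.
        return self.rng.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)
```

In a MARL setting each agent (here, each VPP or the distribution system operator) would typically keep its own buffer and draw minibatches from it at every training step.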