Journal Articles
41,476 articles found
1. A Rapid Adaptation Approach for Dynamic Air‑Writing Recognition Using Wearable Wristbands with Self‑Supervised Contrastive Learning
Authors: Yunjian Guo, Kunpeng Li, Wei Yue, Nam‑Young Kim, Yang Li, Guozhen Shen, Jong‑Chul Lee. Nano-Micro Letters (SCIE, EI, CAS), 2025, No. 2, pp. 417-431 (15 pages).
Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Through wireless transmission from a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only few-shot labeled data are sufficient for fine-tuning the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% in different scenarios, including the prediction of eight-direction commands, and air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without the need for modifying the structure or undergoing extensive task-specific training. Its utility has been further extended to enhance human–machine interaction over digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communication.
Keywords: Wearable wristband; Self-supervised contrastive learning; Dynamic gesture; Air-writing; Human-machine interaction
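The abstract above describes self-supervised contrastive pretraining on unlabeled wrist-motion signals followed by few-shot fine-tuning, without giving implementation details. The PyTorch sketch below only illustrates that general recipe, assuming a small 1D-CNN encoder over 4-channel capacitance windows, an NT-Xent-style contrastive loss on two augmented views, and a linear head fine-tuned on a handful of labeled samples; the shapes, augmentations, and hyperparameters are illustrative assumptions, not the paper's architecture.

```python
# Illustrative sketch only: not the authors' actual model.
# Assumes 4-channel capacitance windows of length 256; all values are made up.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Small 1D-CNN that maps a (4, 256) signal window to a feature vector."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
    def forward(self, x):
        return self.net(x)

def nt_xent(z1, z2, tau=0.1):
    """NT-Xent contrastive loss between two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / tau
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Pretraining on unlabeled windows: two random augmentations (jitter, scaling).
unlabeled = torch.randn(128, 4, 256)            # placeholder for recorded signals
for _ in range(10):
    v1 = unlabeled + 0.01 * torch.randn_like(unlabeled)
    v2 = unlabeled * (1 + 0.05 * torch.randn(128, 1, 1))
    loss = nt_xent(encoder(v1), encoder(v2))
    opt.zero_grad(); loss.backward(); opt.step()

# Few-shot fine-tuning: a linear head trained on a handful of labeled windows.
head = nn.Linear(64, 8)                          # e.g., eight direction commands
few_x, few_y = torch.randn(16, 4, 256), torch.randint(0, 8, (16,))
clf_opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
for _ in range(20):
    loss = F.cross_entropy(head(encoder(few_x)), few_y)
    clf_opt.zero_grad(); loss.backward(); clf_opt.step()
```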
2. High-throughput screening of CO_(2) cycloaddition MOF catalyst with an explainable machine learning model
Authors: Xuefeng Bai, Yi Li, Yabo Xie, Qiancheng Chen, Xin Zhang, Jian-Rong Li. Green Energy & Environment (SCIE, EI, CAS), 2025, No. 1, pp. 132-138 (7 pages).
The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform. High-throughput screening of catalytic performance is feasible since large MOF structure databases are available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO_(2) cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to a high accuracy of up to 97% with the 75% quantile of the training set as the classification criterion. The feature contributions were further evaluated with SHAP and PDP analysis to provide a degree of physical understanding. Using the model, 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated at 100 °C and 1 bar within one day, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top experimental performance among the reported MOFs, in good agreement with the prediction.
Keywords: Metal-organic frameworks; High-throughput screening; Machine learning; Explainable model; CO_(2) cycloaddition
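The screening workflow above labels a MOF as an efficient catalyst when its predicted performance exceeds the 75% quantile of the training set and then interprets the model with SHAP and PDP. A minimal sketch of that quantile-thresholded classification step is given below, assuming a random-forest classifier and invented descriptor names; the actual descriptor set, model choice, and data are not reproduced here.

```python
# Illustrative sketch: quantile-thresholded "high performance" labels plus a
# tree-based classifier, loosely mirroring the screening workflow described
# above. Descriptor names and data are placeholders, not the paper's dataset.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "lewis_acid_site_density": rng.uniform(0, 2, 500),   # assumed descriptor
    "pore_limiting_diameter":  rng.uniform(3, 15, 500),  # assumed descriptor
    "metal_charge":            rng.uniform(0.5, 2.5, 500),
})
yield_co2 = df["lewis_acid_site_density"] * 30 + rng.normal(0, 5, 500)  # toy target

X_train, X_test, y_train, y_test = train_test_split(df, yield_co2, random_state=0)

# Label "active" catalysts as those above the 75% quantile of the *training* yields.
threshold = np.quantile(y_train, 0.75)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train > threshold)

print("accuracy:", accuracy_score(y_test > threshold, clf.predict(X_test)))
# Feature attribution (SHAP/PDP in the paper) would then rank descriptors, e.g.:
for name, imp in zip(df.columns, clf.feature_importances_):
    print(f"{name}: {imp:.3f}")
```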
3. Multi-model ensemble learning for battery state-of-health estimation: Recent advances and perspectives
Authors: Chuanping Lin, Jun Xu, Delong Jiang, Jiayang Hou, Ying Liang, Zhongyue Zou, Xuesong Mei. Journal of Energy Chemistry, 2025, No. 1, pp. 739-759 (21 pages).
The burgeoning market for lithium-ion batteries has stimulated a growing need for more reliable battery performance monitoring. Accurate state-of-health (SOH) estimation is critical for ensuring battery operational performance. Despite numerous data-driven methods reported in existing research for battery SOH estimation, these methods often exhibit inconsistent performance across different application scenarios. To address this issue and overcome the performance limitations of individual data-driven models, integrating multiple models for SOH estimation has received considerable attention. Ensemble learning (EL) typically leverages the strengths of multiple base models to achieve more robust and accurate outputs. However, the lack of a clear review of current research hinders the further development of ensemble methods in SOH estimation. Therefore, this paper comprehensively reviews multi-model ensemble learning methods for battery SOH estimation. First, existing ensemble methods are systematically categorized into six classes based on their combination strategies. Different realizations and underlying connections are meticulously analyzed for each category of EL methods, highlighting distinctions, innovations, and typical applications. Subsequently, these ensemble methods are comprehensively compared in terms of base models, combination strategies, and publication trends. Evaluations across six dimensions underscore the outstanding performance of stacking-based ensemble methods. Following this, these ensemble methods are further inspected from the perspectives of weighted ensembles and diversity, aiming to inspire potential approaches for enhancing ensemble performance. Moreover, challenges in practical applications are emphasized, such as base model selection, measuring model robustness and uncertainty, and the interpretability of ensemble models. Finally, future research prospects are outlined, specifically noting that deep learning ensembles are poised to advance ensemble methods for battery SOH estimation. The convergence of advanced machine learning with ensemble learning is anticipated to yield valuable avenues for research. Accelerated research in ensemble learning holds promising prospects for achieving more accurate and reliable battery SOH estimation under real-world conditions.
Keywords: Lithium-ion battery; State-of-health estimation; Data-driven; Machine learning; Ensemble learning; Ensemble diversity
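Among the ensemble families reviewed above, stacking is singled out as the strongest performer. The sketch below shows what a stacking-based SOH estimator can look like with scikit-learn, assuming synthetic health-indicator features; the base learners and hyperparameters are placeholders, not recommendations from the review.

```python
# Minimal stacking-ensemble sketch for SOH regression, in the spirit of the
# stacking methods the review highlights. Features and data are synthetic
# placeholders (e.g., incremental-capacity and voltage-curve statistics).
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))                 # assumed health indicators per cycle
soh = 1.0 - 0.05 * X[:, 0] + 0.02 * X[:, 1] + rng.normal(0, 0.01, 400)  # toy SOH

X_tr, X_te, y_tr, y_te = train_test_split(X, soh, random_state=1)

# Base learners capture different biases; a meta-learner combines their outputs.
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=1)),
                ("svr", SVR(C=10.0)),
                ("ridge", Ridge(alpha=1.0))],
    final_estimator=Ridge(alpha=0.1),
    cv=5,
)
stack.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, stack.predict(X_te)))
```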
4. Classifying extended, localized and critical states in quasiperiodic lattices via unsupervised learning
Authors: Bohan Zheng, Siyu Zhu, Xingping Zhou, Tong Liu. Chinese Physics B, 2025, No. 1, pp. 422-427 (6 pages).
Classification of quantum phases is one of the most important areas of research in condensed matter physics. In this work, we obtain the phase diagram of one-dimensional quasiperiodic models via unsupervised learning. Firstly, we choose two advanced unsupervised learning algorithms, namely density-based spatial clustering of applications with noise (DBSCAN) and ordering points to identify the clustering structure (OPTICS), to explore the distinct phases of the Aubry–André–Harper model and the quasiperiodic p-wave model. The unsupervised learning results match well with those obtained through traditional numerical diagonalization. Finally, we assess the similarity across different algorithms and find that the highest degree of similarity between the results of the unsupervised learning algorithms and those of the traditional algorithms exceeds 98%. Our work sheds light on applications of unsupervised learning for phase classification.
Keywords: quantum phase; quasiperiodic; machine learning
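The abstract above applies DBSCAN and OPTICS to separate the phases of the Aubry–André–Harper (AAH) model. The sketch below clusters AAH eigenstate statistics in the same spirit, assuming the inverse participation ratio as the sole feature and toy clustering parameters; the paper's actual feature construction may differ.

```python
# Sketch of unsupervised phase classification with DBSCAN and OPTICS.
# Eigenstates of the Aubry-Andre-Harper (AAH) Hamiltonian are summarized by a
# simple localization feature (inverse participation ratio); the feature choice
# and parameters are illustrative assumptions, not the paper's exact pipeline.
import numpy as np
from sklearn.cluster import DBSCAN, OPTICS

def aah_ipr(L=144, lam=1.0, t=1.0):
    """Mean inverse participation ratio of the AAH model at quasiperiodic strength lam."""
    beta = (np.sqrt(5) - 1) / 2
    n = np.arange(L)
    H = np.diag(2 * lam * np.cos(2 * np.pi * beta * n))
    H += np.diag(np.full(L - 1, t), 1) + np.diag(np.full(L - 1, t), -1)
    _, vecs = np.linalg.eigh(H)
    return np.mean(np.sum(np.abs(vecs) ** 4, axis=0))

# Scan the phase parameter; the AAH model localizes for lam/t > 1.
lams = np.linspace(0.2, 2.0, 60)
features = np.array([[aah_ipr(lam=l)] for l in lams])

db_labels = DBSCAN(eps=0.02, min_samples=5).fit_predict(features)
op_labels = OPTICS(min_samples=5).fit_predict(features)
print("DBSCAN clusters:", set(db_labels))
print("OPTICS clusters:", set(op_labels))
```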
5. Efficient Spatio-Temporal Predictive Learning for Massive MIMO CSI Prediction
Authors: CHENG Jiaming, CHEN Wei, LI Lun, AI Bo. ZTE Communications, 2025, No. 1, pp. 3-10 (8 pages).
Accurate channel state information (CSI) is crucial for 6G wireless communication systems to accommodate the growing demands of mobile broadband services. In massive multiple-input multiple-output (MIMO) systems, traditional CSI feedback approaches face challenges such as performance degradation due to feedback delay and channel aging caused by user mobility. To address these issues, we propose a novel spatio-temporal predictive network (STPNet) that jointly integrates CSI feedback and prediction modules. STPNet employs stacked Inception modules to learn the spatial correlation and temporal evolution of CSI, which captures both the local and the global spatio-temporal features. In addition, a signal-to-noise ratio (SNR) adaptive module is designed to adapt flexibly to diverse feedback channel conditions. Simulation results demonstrate that STPNet outperforms existing channel prediction methods under various channel conditions.
Keywords: massive MIMO; deep learning; CSI prediction; CSI feedback
6. Early identification of high-risk patients admitted to emergency departments using vital signs and machine learning
Authors: Qingyuan Liu, Yixin Zhang, Jian Sun, Kaipeng Wang, Yueguo Wang, Yulan Wang, Cailing Ren, Yan Wang, Jiashan Zhu, Shusheng Zhou, Mengping Zhang, Yinglei Lai, Kui Jin. World Journal of Emergency Medicine, 2025, No. 2, pp. 113-120 (8 pages).
BACKGROUND: Rapid and accurate identification of high-risk patients in emergency departments (EDs) is crucial for optimizing resource allocation and improving patient outcomes. This study aimed to develop an early prediction model for identifying high-risk patients in EDs using initial vital sign measurements. METHODS: This retrospective cohort study analyzed initial vital signs from the Chinese Emergency Triage, Assessment, and Treatment (CETAT) database, which was collected between January 1st, 2020, and June 25th, 2023. The primary outcome was the identification of high-risk patients needing immediate treatment. Various machine learning methods, including a deep-learning-based multilayer perceptron (MLP) classifier, were evaluated. Model performance was assessed using the area under the receiver operating characteristic curve (AUC-ROC). AUC-ROC values were reported for three scenarios: a default case, a scenario requiring sensitivity greater than 0.8 (Scenario I), and a scenario requiring specificity greater than 0.8 (Scenario II). SHAP values were calculated to determine the importance of each predictor within the MLP model. RESULTS: A total of 38,797 patients were analyzed, of whom 18.2% were identified as high-risk. Comparative analysis of the predictive models for high-risk patients showed AUC-ROC values ranging from 0.717 to 0.738, with the MLP model outperforming logistic regression (LR), Gaussian naive Bayes (GNB), and the National Early Warning Score (NEWS). SHAP value analysis identified coma state, peripheral capillary oxygen saturation (SpO_(2)), and systolic blood pressure as the top three predictive factors in the MLP model, with coma state contributing the most. CONCLUSION: Compared with other methods, the MLP model with initial vital signs demonstrated optimal prediction accuracy, highlighting its potential to enhance clinical decision-making in ED triage.
Keywords: Machine learning; Triage; Emergency medicine; Decision support systems
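The study above trains a multilayer perceptron on initial vital signs and reports AUC-ROC at sensitivity- and specificity-constrained operating points. A scikit-learn sketch of that setup on synthetic data is shown below; the feature names, outcome definition, and network size are assumptions, not the CETAT dataset or the published model.

```python
# Illustrative triage-model sketch: an MLP classifier on initial vital signs,
# evaluated by AUC-ROC. All feature names and data are synthetic assumptions.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "systolic_bp": rng.normal(125, 20, n),
    "heart_rate":  rng.normal(85, 15, n),
    "spo2":        rng.normal(96, 3, n),
    "coma_state":  rng.integers(0, 2, n),     # assumed binary indicator
})
# Toy outcome loosely tied to the abstract's top predictors.
risk = 0.04 * (120 - df["spo2"]) + 1.5 * df["coma_state"] - 0.01 * df["systolic_bp"]
y = (risk + rng.normal(0, 0.5, n) > risk.quantile(0.82)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(df, y, stratify=y, random_state=42)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                                    random_state=42))
model.fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]
print("AUC-ROC:", roc_auc_score(y_te, proba))

# Operating points such as "sensitivity > 0.8" can be read off the ROC curve.
fpr, tpr, thr = roc_curve(y_te, proba)
print("first threshold with sensitivity > 0.8:", thr[np.argmax(tpr > 0.8)])
```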
7. Learning complex nonlinear physical systems using wavelet neural operators
Authors: Yanan Guo, Xiaoqun Cao, Hongze Leng, Junqiang Song. Chinese Physics B, 2025, No. 3, pp. 461-472 (12 pages).
Nonlinear science is a fundamental area of physics research that investigates complex dynamical systems, which are often characterized by high sensitivity and nonlinear behaviors. Numerical simulations play a pivotal role in nonlinear science, serving as a critical tool for revealing the underlying principles governing these systems. In addition, they play a crucial role in accelerating progress across various fields, such as climate modeling, weather forecasting, and fluid dynamics. However, their high computational cost limits their application in high-precision or long-duration simulations. In this study, we propose a novel data-driven approach for simulating complex physical systems, particularly turbulent phenomena. Specifically, we develop an efficient surrogate model based on the wavelet neural operator (WNO). Experimental results demonstrate that the enhanced WNO model can accurately simulate small-scale turbulent flows at lower computational cost. In simulations of complex physical fields, the improved WNO model outperforms established deep learning models, such as U-Net, ResNet, and the Fourier neural operator (FNO), in terms of accuracy. Notably, the improved WNO model exhibits exceptional generalization capabilities, maintaining stable performance across a wide range of initial conditions and high-resolution scenarios without retraining. This study highlights the significant potential of the enhanced WNO model for simulating complex physical systems, providing strong evidence to support the development of more efficient, scalable, and high-precision simulation techniques.
Keywords: nonlinear science; turbulence; deep learning; wavelet neural operator
8. Robust Transmission Design for Federated Learning Through Over-the-Air Computation
Authors: Hamideh Zamanpour Abyaneh, Saba Asaad, Amir Masoud Rabiei. China Communications, 2025, No. 3, pp. 65-75 (11 pages).
Over-the-air computation (AirComp) enables federated learning (FL) to rapidly aggregate local models at the central server using the waveform superposition property of the wireless channel. In this paper, a robust transmission scheme for an AirComp-based FL system with imperfect channel state information (CSI) is proposed. To model CSI uncertainty, an expectation-based error model is utilized. The main objective is to maximize the number of selected devices that meet mean-squared error (MSE) requirements for model broadcast and model aggregation. The problem is formulated as a combinatorial optimization problem and is solved in two steps. First, the priority order of devices is determined by a sparsity-inducing procedure. Then, a feasibility detection scheme is used to select the maximum number of devices while guaranteeing that the MSE requirements are met. An alternating optimization (AO) scheme is used to transform the resulting nonconvex problem into two convex subproblems. Numerical results illustrate the effectiveness and robustness of the proposed scheme.
Keywords: federated learning; imperfect CSI; optimization; over-the-air computing; robust design
9. Machine learning empowers efficient design of ternary organic solar cells with PM6 donor
Authors: Kiran A. Nirmal, Tukaram D. Dongale, Santosh S. Sutar, Atul C. Khot, Tae Geun Kim. Journal of Energy Chemistry, 2025, No. 1, pp. 337-347 (11 pages).
Organic solar cells (OSCs) hold great potential as a photovoltaic technology for practical applications. However, the traditional experimental trial-and-error method for designing and engineering OSCs can be complex, expensive, and time-consuming. Machine learning (ML) techniques enable the proficient extraction of information from datasets, allowing the development of realistic models that are capable of predicting the efficacy of materials with commendable accuracy. The PM6 donor has great potential for high-performance OSCs. However, for the rational design of a ternary blend, it is crucial to accurately forecast the power conversion efficiency (PCE) of ternary OSCs (TOSCs) based on a PM6 donor. Accordingly, we collected the device parameters of PM6-based TOSCs and evaluated the feature importance of their molecular descriptors to develop predictive models. In this study, we used five different ML algorithms for analysis and prediction. For the analysis, the classification and regression tree provided different rules, heuristics, and patterns from the heterogeneous dataset. The random forest algorithm outperformed the other ML algorithms in predicting the output performance of PM6-based TOSCs. Finally, we validated the ML outcomes by fabricating PM6-based TOSCs. Our study presents a rapid strategy for assessing a high PCE while elucidating the substantial influence of diverse descriptors.
Keywords: Machine learning; Ternary organic solar cells; PM6 donor; PCE
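The abstract above reports that a random forest best predicted the PCE of PM6-based ternary OSCs from molecular descriptors. The sketch below illustrates such a random-forest regression with feature-importance ranking, using invented descriptors and synthetic data in place of the curated device dataset.

```python
# Sketch of the kind of random-forest PCE regression the abstract describes.
# Descriptor names and values are placeholders, not the paper's dataset.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 300
X = pd.DataFrame({
    "acceptor_homo_ev":   rng.uniform(-6.0, -5.2, n),   # assumed descriptor
    "acceptor_lumo_ev":   rng.uniform(-4.2, -3.6, n),
    "third_component_wt": rng.uniform(0.0, 0.3, n),
    "bandgap_ev":         rng.uniform(1.2, 1.7, n),
})
pce = (12 + 3 * X["third_component_wt"] - 2 * (X["bandgap_ev"] - 1.4) ** 2
       + rng.normal(0, 0.5, n))                          # toy PCE in %

rf = RandomForestRegressor(n_estimators=400, random_state=7)
print("CV R^2:", cross_val_score(rf, X, pce, cv=5).mean())

rf.fit(X, pce)
for name, imp in sorted(zip(X.columns, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```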
10. Comprehensive review of advances in machine-learning-driven optimization and characterization of perovskite materials for photovoltaic devices
Authors: Bonghyun Jo, Wenning Chen, Hyun Suk Jung. Journal of Energy Chemistry, 2025, No. 2, pp. 298-323, I0007 (27 pages).
Perovskite solar cells (PSCs) have developed rapidly, positioning them as potential candidates for next-generation renewable energy sources. However, conventional trial-and-error approaches and the vast compositional parameter space continue to pose challenges in the pursuit of exceptional performance and high stability of perovskite-based optoelectronics. The increasing demand for novel materials in optoelectronic devices and the establishment of substantial databases have enabled data-driven machine-learning (ML) approaches to advance swiftly in the materials field. This review succinctly outlines the fundamental ML procedures, techniques, and recent breakthroughs, particularly in predicting the physical characteristics of perovskite materials. Moreover, it highlights research endeavors aimed at optimizing and screening materials to enhance the efficiency and stability of PSCs. Additionally, this review highlights recent efforts in using characterization data for ML and exploring their correlations with material properties and device performance, an area that is actively being researched but has yet to receive significant attention. Lastly, we provide future perspectives, such as leveraging large language models (LLMs) and text mining, to expedite the discovery of novel perovskite materials and expand their utilization across various optoelectronic fields.
Keywords: Perovskite solar cell; Data-driven machine learning; Characterization; Perovskite materials
11. Enhanced battery life prediction with reduced data demand via semi-supervised representation learning
Authors: Liang Ma, Jinpeng Tian, Tieling Zhang, Qinghua Guo, Chi Yung Chung. Journal of Energy Chemistry, 2025, No. 2, pp. 524-534, I0011 (12 pages).
Accurate prediction of the remaining useful life (RUL) is crucial for the design and management of lithium-ion batteries. Although various machine learning models offer promising predictions, one critical but often overlooked challenge is their demand for considerable run-to-failure data for training. Collection of such training data leads to prohibitive testing efforts, as the run-to-failure tests can last for years. Here, we propose a semi-supervised representation learning method to enhance prediction accuracy by learning from data without RUL labels. Our approach builds on a sophisticated deep neural network that comprises an encoder and three decoder heads to extract time-dependent representation features from short-term battery operating data regardless of the existence of RUL labels. The approach is validated using three datasets collected from 34 batteries operating under various conditions, encompassing over 19,900 charge and discharge cycles. Our method achieves a root mean squared error (RMSE) within 25 cycles, even when only 1/50 of the training dataset is labelled, representing a reduction of 48% compared to the conventional approach. We also demonstrate the method's robustness with varying amounts of labelled data and different weights assigned to the three decoder heads. The projection of the extracted features into a low-dimensional space reveals that our method effectively learns degradation features from unlabelled data. Our approach highlights the promise of utilising semi-supervised learning to reduce the data demand for reliability monitoring of energy devices.
Keywords: Lithium-ion batteries; Battery degradation; Remaining useful life; Semi-supervised learning
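The method above couples one encoder with three decoder heads so that unlabelled short-term cycling data still contribute to training. The PyTorch sketch below shows one way to wire such a multi-head network with a weighted loss and a label mask; the roles assigned to the heads (reconstruction, capacity, RUL) and the loss weights are assumptions for illustration only.

```python
# Minimal sketch of an encoder with three decoder heads trained on a weighted
# sum of losses, echoing the semi-supervised design described above. Head roles
# and weights are assumed, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadNet(nn.Module):
    def __init__(self, in_dim=32, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent))
        self.recon_head = nn.Linear(latent, in_dim)   # unsupervised head
        self.cap_head   = nn.Linear(latent, 1)        # auxiliary head
        self.rul_head   = nn.Linear(latent, 1)        # supervised head
    def forward(self, x):
        z = self.encoder(x)
        return self.recon_head(z), self.cap_head(z), self.rul_head(z)

net = MultiHeadNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Placeholder batch: features from short-term cycling data; only a small
# fraction of samples carries an RUL label (mask = 1).
x = torch.randn(64, 32)
cap = torch.rand(64, 1)
rul = torch.rand(64, 1) * 1000
labelled = (torch.rand(64, 1) < 0.02).float()          # roughly 1/50 labelled

for _ in range(100):
    recon, cap_hat, rul_hat = net(x)
    loss = (1.0 * F.mse_loss(recon, x)
            + 0.5 * F.mse_loss(cap_hat, cap)
            + 1.0 * (labelled * (rul_hat - rul) ** 2).sum()
                  / labelled.sum().clamp(min=1))
    opt.zero_grad(); loss.backward(); opt.step()
```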
12. Significant increase in thermal conductivity of cathode material LiFePO_(4) by Na substitution: A machine learning interatomic potential-assisted investigation
Authors: Shi-Yi Li, Qian Liu, Yu-Jia Zeng, Guofeng Xie, Wu-Xing Zhou. Chinese Physics B, 2025, No. 2, pp. 463-468 (6 pages).
LiFePO_(4) is a cathode material with good thermal stability, but its low thermal conductivity is a critical problem. In this study, we employ a machine learning potential approach based on first-principles methods combined with the Boltzmann transport theory to investigate the influence of Na substitution on the thermal conductivity of LiFePO_(4) and the impact of Li-ion de-intercalation on the thermal conductivity of Li_(3/4)Na_(1/4)FePO_(4), with the aim of enhancing heat dissipation in Li-ion batteries. The results show a significant increase in thermal conductivity due to an increase in phonon group velocity and a decrease in phonon anharmonic scattering caused by Na substitution. In addition, the thermal conductivity increases significantly with decreasing Li-ion concentration due to the increase in phonon lifetime. Our work guides the improvement of the thermal conductivity of LiFePO_(4), emphasizing the crucial roles of both substitution and Li-ion de-intercalation/intercalation for the thermal management of electrochemical energy storage devices.
Keywords: lattice thermal conductivity; machine learning potential; LiFePO_(4)
13. Continuum estimation in low-resolution gamma-ray spectra based on deep learning
Authors: Ri Zhao, Li-Ye Liu, Xin Liu, Zhao-Xing Liu, Run-Cheng Liang, Ren-Jing Ling-Hu, Jing Zhang, Fa-Guo Chen. Nuclear Science and Techniques, 2025, No. 2, pp. 5-17 (13 pages).
In this study, an end-to-end deep learning method is proposed to improve the accuracy of continuum estimation in low-resolution gamma-ray spectra. A novel process for generating the theoretical continuum of a simulated spectrum is established, and a convolutional neural network consisting of 51 layers and more than 10^(5) parameters is constructed to directly predict the entire continuum from the extracted global spectrum features. For testing, an in-house NaI-type whole-body counter is used, and 10^(6) training spectrum samples (20% of which are reserved for testing) are generated using Monte Carlo simulations. In addition, the existing fitting, step-type, and peak erosion methods are selected for comparison. The proposed method exhibits excellent performance, as evidenced by its activity error distribution and the smallest mean activity error of 1.5% among the evaluated methods. Additionally, a validation experiment is performed using a whole-body counter to analyze a human physical phantom containing four radionuclides. The largest activity error of the proposed method is −5.1%, which is considerably smaller than those of the comparative methods, confirming the test results. The multiscale feature extraction and nonlinear relation modeling in the proposed method establish a novel approach for accurate and convenient continuum estimation in low-resolution gamma-ray spectra. Thus, the proposed method is promising for accurate quantitative radioactivity analysis in practical applications.
Keywords: Gamma-ray spectrum; Continuum estimation; Deep learning; Convolutional neural network; End-to-end prediction
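The network above maps a measured spectrum directly to its continuum end-to-end. The sketch below keeps that input-output contract but with a much smaller 1D CNN and random placeholder data standing in for the Monte Carlo training spectra; the published model has 51 layers and over 10^(5) parameters, so this is only a scaled-down stand-in.

```python
# Toy end-to-end sketch: a small 1D CNN that maps a gamma-ray spectrum directly
# to its continuum, mirroring the input/output of the method above. The real
# network is far deeper; data here are random placeholders.
import torch
import torch.nn as nn

class ContinuumNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4),
        )

    def forward(self, spectrum):          # spectrum: (batch, 1, n_channels)
        return self.body(spectrum)

net = ContinuumNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Placeholder pair: a simulated spectrum and its known continuum, standing in
# for the Monte Carlo samples used to train the published model.
spectrum = torch.rand(8, 1, 1024)
continuum = torch.rand(8, 1, 1024)
for _ in range(50):
    loss = nn.functional.mse_loss(net(spectrum), continuum)
    opt.zero_grad(); loss.backward(); opt.step()
```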
14. Few-shot learning for screening 2D Ga_(2)CoS_(4-x) supported single-atom catalysts for hydrogen production
Authors: Nabil Khossossi, Poulumi Dey. Journal of Energy Chemistry, 2025, No. 1, pp. 665-673 (9 pages).
Hydrogen generation and related energy applications heavily rely on the hydrogen evolution reaction (HER), which faces challenges of slow kinetics and high overpotential. Efficient electrocatalysts, particularly single-atom catalysts (SACs) on two-dimensional (2D) materials, are essential. This study presents a few-shot machine learning (ML) assisted high-throughput screening of 2D septuple-atomic-layer Ga_(2)CoS_(4-x)-supported SACs to predict HER catalytic activity. Initially, density functional theory (DFT) calculations showed that 2D Ga_(2)CoS_(4) is inactive for HER. However, defective Ga_(2)CoS_(4-x) (x = 0–0.25) monolayers exhibit excellent HER activity due to surface sulfur vacancies (SVs), with predicted overpotentials (0–60 mV) comparable to or lower than commercial Pt/C, which typically exhibits an overpotential of around 50 mV in acidic electrolyte, when the concentration of surface SVs is lower than 8.3%. SVs generate spin-polarized states near the Fermi level, making them effective HER sites. We demonstrate ML-accelerated HER overpotential predictions for all transition-metal SACs on 2D Ga_(2)CoS_(4-x). Using DFT data from 18 SACs, an ML model with high prediction accuracy and reduced computation time was developed. An intrinsic descriptor linking SAC atomic properties to the HER overpotential was identified. This study thus provides a framework for screening SACs on 2D materials, enhancing catalyst design.
Keywords: Hydrogen production; Electrocatalyst; 2D material; Density functional theory; Machine learning; Surface sulfur vacancy
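The screening above fits an ML model to DFT results for only 18 single-atom catalysts and then predicts HER overpotentials for further candidates. A few-shot regression sketch in that spirit is shown below, assuming a Gaussian-process model and three invented atomic descriptors; the paper's actual model and intrinsic descriptor are not specified in the abstract.

```python
# Few-shot regression sketch: fit a Gaussian-process model on a handful of
# DFT-computed samples (18, as in the abstract) and predict the HER
# overpotential of unseen single-atom catalysts. Descriptors and data are
# invented placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X_dft = rng.normal(size=(18, 3))                  # 18 DFT-characterized SACs
eta = 0.2 + 0.1 * X_dft[:, 0] + rng.normal(0, 0.02, 18)   # toy overpotential (V)

model = make_pipeline(
    StandardScaler(),
    GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True),
)
model.fit(X_dft, eta)

# Screen new candidate SACs without further DFT calculations.
X_candidates = rng.normal(size=(5, 3))
print("predicted overpotentials (V):", model.predict(X_candidates).round(3))
```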
15. Adaptive multi-agent reinforcement learning for dynamic pricing and distributed energy management in virtual power plant networks
Authors: Jian-Dong Yao, Wen-Bin Hao, Zhi-Gao Meng, Bo Xie, Jian-Hua Chen, Jia-Qi Wei. Journal of Electronic Science and Technology, 2025, No. 1, pp. 35-59 (25 pages).
This paper presents a novel approach to dynamic pricing and distributed energy management in virtual power plant (VPP) networks using multi-agent reinforcement learning (MARL). As the energy landscape evolves towards greater decentralization and renewable integration, traditional optimization methods struggle to address the inherent complexities and uncertainties. Our proposed MARL framework enables adaptive, decentralized decision-making for both the distribution system operator and individual VPPs, optimizing economic efficiency while maintaining grid stability. We formulate the problem as a Markov decision process and develop a custom MARL algorithm that leverages actor-critic architectures and experience replay. Extensive simulations across diverse scenarios demonstrate that our approach consistently outperforms baseline methods, including Stackelberg game models and model predictive control, achieving an 18.73% reduction in costs and a 22.46% increase in VPP profits. The MARL framework shows particular strength in scenarios with high renewable energy penetration, where it improves system performance by 11.95% compared with traditional methods. Furthermore, our approach demonstrates superior adaptability to unexpected events and mis-predictions, highlighting its potential for real-world implementation.
Keywords: Distributed energy management; Dynamic pricing; Multi-agent reinforcement learning; Renewable energy integration; Virtual power plants
16. Sub-6GHz Assisted mmWave Hybrid Beamforming with Self-Supervised Learning
Authors: Li Hongyao, Gao Feifei, Lin Bo, Wu Huihui, Gu Yuantao, Xi Jianxiang. China Communications, 2025, No. 1, pp. 158-170 (13 pages).
In this paper, we propose a sub-6GHz channel assisted hybrid beamforming (HBF) scheme for mmWave systems under both line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios without mmWave channel estimation. Meanwhile, we resort to a self-supervised approach to eliminate the need for labels, thus avoiding the high cost of data collection and annotation. We first construct the dense connection network (DCnet) with three modules: the feature extraction module for extracting channel characteristics from a large amount of channel data, the feature fusion module for combining multidimensional features, and the prediction module for generating the HBF matrices. Next, we establish a lightweight network architecture, named LDnet, to reduce the number of model parameters and the computational complexity. The proposed sub-6GHz assisted approach eliminates mmWave pilot resources compared to methods that use mmWave channel information directly. The simulation results indicate that the proposed DCnet and LDnet achieve spectral efficiency superior to the traditional orthogonal matching pursuit (OMP) algorithm by 13.66% and 10.44% under LOS scenarios and by 32.35% and 27.75% under NLOS scenarios, respectively. Moreover, LDnet achieves a 98.52% reduction in the number of model parameters and a 22.93% reduction in computational complexity compared to DCnet.
Keywords: hybrid beamforming; mmWave; self-supervised learning; sub-6GHz assisted mmWave transmission; sub-6GHz channel
17. Improving performance of screening MM/PBSA in protein–ligand interactions via machine learning
Authors: Yuan-Qiang Chen, Yao Xu, Yu-Qiang Ma, Hong-Ming Ding. Chinese Physics B, 2025, No. 1, pp. 486-496 (11 pages).
Accurately estimating protein–ligand binding free energy is crucial for drug design and biophysics, yet it remains a challenging task. In this study, we applied the screening molecular mechanics/Poisson–Boltzmann surface area (MM/PBSA) method in combination with various machine learning techniques to compute the binding free energies of protein–ligand interactions. Our results demonstrate that machine learning outperforms direct screening MM/PBSA calculations in predicting protein–ligand binding free energies. Notably, the random forest (RF) method exhibited the best predictive performance, with a Pearson correlation coefficient (r_(p)) of 0.702 and a mean absolute error (MAE) of 1.379 kcal/mol. Furthermore, we analyzed feature importance rankings in the gradient boosting (GB), adaptive boosting (AdaBoost), and RF methods, and found that feature selection significantly impacted predictive performance. In particular, molecular weight (MW) and van der Waals (VDW) energies played a decisive role in the prediction. Overall, this study highlights the potential of combining machine learning methods with screening MM/PBSA for accurately predicting binding free energies in biosystems.
Keywords: molecular mechanics/Poisson-Boltzmann surface area (MM/PBSA); binding free energy; machine learning; protein-ligand interaction
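The study above regresses binding free energies on screening MM/PBSA-derived features and reports a Pearson r of 0.702 and an MAE of 1.379 kcal/mol for the random forest. The sketch below reproduces only that evaluation pattern (random-forest regression plus the two metrics) on synthetic placeholder features; the actual energy terms and dataset are assumptions.

```python
# Sketch of learning binding free energies from MM/PBSA-style terms with a
# random forest, reporting the metrics quoted above (Pearson r and MAE).
# Feature names and data are placeholders, not the study's screening output.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(11)
n = 500
X = pd.DataFrame({
    "vdw_energy":  rng.normal(-40, 10, n),   # kcal/mol, assumed term
    "elec_energy": rng.normal(-15, 8, n),
    "polar_solv":  rng.normal(25, 6, n),
    "mol_weight":  rng.normal(400, 80, n),
})
dg_exp = 0.2 * X["vdw_energy"] + 0.05 * X["elec_energy"] + rng.normal(0, 1.0, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, dg_exp, random_state=11)
rf = RandomForestRegressor(n_estimators=500, random_state=11).fit(X_tr, y_tr)
pred = rf.predict(X_te)

r_p, _ = pearsonr(y_te, pred)
print(f"Pearson r = {r_p:.3f}, MAE = {mean_absolute_error(y_te, pred):.3f} kcal/mol")
```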
18. Path planning for intelligent warehouse AGVs based on an improved Q-learning algorithm
Authors: 耿华, 冯涛. 《现代信息科技》, 2025, No. 2, pp. 171-175 (5 pages).
The automated guided vehicle (AGV) is an important transport tool in intelligent logistics systems, and AGV path planning and obstacle avoidance algorithms are among the key research topics in mobile robotics. To address the slow early convergence and the imbalance between exploration and exploitation that arise when AGVs in existing warehouse environments use the Q-learning algorithm for path planning, an improved Q-learning algorithm incorporating an attractive potential field is proposed, together with a dynamic adjustment of the greedy coefficient. First, to counter the low learning efficiency of conventional Q-learning during planning, an attractive field from the AGV to the target point is constructed, which guides the AGV to keep moving toward the target, reduces the blindness of the algorithm in its early stage, and strengthens its goal-directedness in the initial phase. Then, to balance exploration and exploitation, the greedy coefficient is adjusted dynamically. Simulation experiments show that, along with a faster exploration rate, the stability of the algorithm is also improved to a certain extent.
Keywords: Q-learning algorithm; reinforcement learning; artificial potential field algorithm; AGV; path planning
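The abstract above improves Q-learning for AGV path planning with an attractive potential field toward the goal and a dynamically adjusted greedy coefficient. The sketch below implements both ideas on a toy grid world, using potential-based reward shaping and a decaying epsilon; the grid, rewards, and schedule are illustrative assumptions rather than the paper's exact design.

```python
# Minimal grid-world sketch of the two ideas in the abstract: (1) an attractive
# potential toward the goal added to the reward to reduce early blind exploration,
# and (2) a greedy coefficient (epsilon) that decays over episodes.
import numpy as np

SIZE, GOAL = 10, (9, 9)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]          # up, down, left, right
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
alpha, gamma = 0.1, 0.95
rng = np.random.default_rng(0)

def potential(s):
    """Attractive potential: closer to the goal gives a higher value."""
    return -0.1 * (abs(s[0] - GOAL[0]) + abs(s[1] - GOAL[1]))

for episode in range(500):
    eps = max(0.05, 1.0 - episode / 300)              # dynamic greedy coefficient
    s = (0, 0)
    for _ in range(200):
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s]))
        ns = (min(max(s[0] + ACTIONS[a][0], 0), SIZE - 1),
              min(max(s[1] + ACTIONS[a][1], 0), SIZE - 1))
        r = 10.0 if ns == GOAL else -1.0
        r += potential(ns) - potential(s)             # potential-based shaping
        Q[s][a] += alpha * (r + gamma * np.max(Q[ns]) - Q[s][a])
        s = ns
        if s == GOAL:
            break
```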
19. Airport flight delay prediction based on the Q-learning algorithm
Authors: 刘琪, 乐美龙. 《航空计算技术》, 2025, No. 1, pp. 28-32 (5 pages).
A combined prediction model is built by integrating an improved deep belief network (DBN) with the Q-learning algorithm. The delay prediction problem is first modeled as a standard Markov decision process, and the improved deep belief network is used to select key features. Through DBN analysis, 27 key feature categories are selected from 46 feature variables as the final explanatory variables of delay time and fed into the Q-learning algorithm, enabling real-time prediction of flight delays. Test experiments on flight data from Beijing Capital International Airport show that the proposed model can effectively predict flight delays, with a mean error of 4.05 min. Comparing the combined algorithm against four baseline methods, the DBN-based Q-learning algorithm achieves higher delay prediction accuracy than the other four algorithms.
Keywords: air transportation; flight delay prediction; deep belief network; Q-learning; flight delay
20. Exploration and application of M-learning combined with CBL in standardized residency training in gastroenterology
Authors: 洪静, 程中华, 余金玲, 王韶英, 嵇贝纳, 冯珍. 《中国卫生产业》, 2024, No. 2, pp. 203-205 (3 pages).
Objective: To explore the effect of a mobile learning platform (M-learning, ML) combined with case-based learning (CBL) in the standardized residency training of a gastroenterology department. Methods: Eighty physicians who took part in standardized residency training in the gastroenterology department of Xuhui District Central Hospital, Shanghai, between January 2021 and January 2023 were selected and divided into a study group and a control group of 40 each using a random number table. The control group received traditional lecture-based teaching, while the study group received M-learning combined with CBL. The theoretical examination scores, practical skill examination scores, and learning satisfaction of the two groups were compared. Results: The theoretical and practical skill examination scores of the study group were both higher than those of the control group, and the differences were statistically significant (both P<0.05); the learning satisfaction of the study group was significantly higher than that of the control group (P<0.05). Conclusion: Applying M-learning combined with CBL to standardized residency training in gastroenterology not only improves physicians' theoretical and practical skill examination scores but also effectively increases their learning satisfaction.
Keywords: M-learning; CBL; gastroenterology; standardized residency training