Journal Articles
67,321 articles found
1. Boltzmann-optimized Q-learning handover control algorithm for high-speed railway (Cited: 1)
Authors: 陈永, 康婕. 《控制理论与应用》 (PKU Core), 2025, Issue 4, pp. 688-694 (7 pages)
To address the low handover success rate in 5G-R high-speed railway handover, where a fixed handover threshold is used and effects such as co-channel interference and ping-pong handover are ignored, a Boltzmann-optimized Q-learning handover control algorithm is proposed. First, a Q-table indexed by train position and action is designed, and the Q-learning reward function is constructed by jointly considering ping-pong handover, bit error rate, and related factors. Then, a Boltzmann search strategy is proposed to optimize action selection and improve the convergence of the handover algorithm. Finally, the Q-table is updated while accounting for co-channel interference from base stations, yielding the handover decision parameters that control handover execution. Simulation results show that, at different running speeds and in different operating scenarios, the improved algorithm effectively increases the handover success rate compared with conventional algorithms while satisfying the quality-of-service (QoS) requirements of the wireless communication.
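As a reading aid only (not taken from the paper), the sketch below shows the generic Boltzmann (softmax) action-selection rule that this kind of Q-learning handover scheme relies on; the temperature value, state discretisation, and two-action set are assumptions.

```python
import numpy as np

def boltzmann_action(q_table, state, temperature=0.5):
    """Choose an action with probability proportional to exp(Q / temperature)."""
    q = q_table[state]
    prefs = np.exp((q - q.max()) / temperature)   # subtract max for numerical stability
    probs = prefs / prefs.sum()
    return np.random.choice(len(q), p=probs)

# Toy usage: 100 discretised train positions, 2 actions (0 = keep serving cell, 1 = hand over)
q_table = np.zeros((100, 2))
action = boltzmann_action(q_table, state=42)
```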
Keywords: handover; 5G-R; Q-learning algorithm; Boltzmann optimization strategy
2. Green mobile edge computing task offloading strategy based on MDP and Q-learning
Authors: 赵宏伟, 吕盛凱, 庞芷茜, 马子涵, 李雨. 《河南理工大学学报(自然科学版)》 (PKU Core), 2025, Issue 5, pp. 9-16 (8 pages)
Objective: To achieve carbon neutrality in manufacturing-oriented industrial Internet enterprises such as automobile and air-conditioner makers, edge computing task offloading is used to handle the offloading of production equipment tasks, reducing the central load on servers as well as the energy consumption and carbon emissions of data centers. Methods: A green edge computing task offloading strategy based on a Markov decision process (MDP) and Q-learning is proposed. The strategy considers constraints such as computing frequency, transmission power, and carbon emissions; based on a cloud-edge-device collaborative computing model, the carbon emission optimization problem is transformed into a mixed-integer linear programming model, which is solved with MDP and Q-learning. The strategy is compared with a random allocation algorithm, the Q-learning algorithm, and the SARSA (state-action-reward-state-action) algorithm in terms of convergence, carbon emissions, and total delay. Results: Compared with existing offloading strategies, the task scheduling algorithm of the new strategy converges 5% and 2% faster than the SARSA and Q-learning algorithms respectively; its system carbon emission cost is 8% and 22% lower than the Q-learning and SARSA algorithms respectively; in terms of the number of terminals, the new strategy reduces the terminal count by 6% and 7% compared with the Q-learning and SARSA algorithms respectively; and its total system computing delay is clearly lower than that of the other algorithms, 27%, 14%, and 22% below the random allocation, Q-learning, and SARSA algorithms respectively. Conclusion: The strategy can reasonably optimize the offloading of computing tasks and resource allocation, balance delay against energy consumption, and reduce system carbon emissions.
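For orientation, here is a minimal sketch of the tabular Q-learning update such an offloading strategy performs; the state discretisation, the action set (local / edge / cloud), and the reward weights on delay, energy, and carbon are assumptions, not the paper's actual formulation.

```python
import numpy as np

N_STATES, N_ACTIONS = 50, 3      # assumed: discretised (task size, queue length) x {local, edge, cloud}
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma = 0.1, 0.9

def reward(delay, energy, carbon, w=(0.4, 0.3, 0.3)):
    # Illustrative weighted cost turned into a reward (lower delay/energy/carbon is better).
    return -(w[0] * delay + w[1] * energy + w[2] * carbon)

def q_update(s, a, r, s_next):
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

q_update(s=3, a=1, r=reward(delay=0.2, energy=0.5, carbon=0.1), s_next=7)
```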
Keywords: carbon emissions; edge computing; reinforcement learning; Markov decision process; task offloading
3. Multi-QoS routing algorithm based on reinforcement learning for LEO satellite networks (Cited: 1)
Authors: ZHANG Yifan, DONG Tao, LIU Zhihui, JIN Shichao. Journal of Systems Engineering and Electronics, 2025, Issue 1, pp. 37-47 (11 pages)
Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources of individual satellite nodes and dynamic network topology, which have brought many challenges for routing algorithms. To satisfy quality of service (QoS) requirements of various users, it is critical to research efficient routing strategies to fully utilize satellite resources. This paper proposes a multi-QoS information optimized routing algorithm based on reinforcement learning for LEO satellite networks, which guarantees high level assurance demand services to be prioritized under limited satellite resources while considering the load balancing performance of the satellite networks for low level assurance demand services to ensure the full and effective utilization of satellite resources. An auxiliary path search algorithm is proposed to accelerate the convergence of the satellite routing algorithm. Simulation results show that the generated routing strategy can timely process and fully meet the QoS demands of high assurance services while effectively improving the load balancing performance of the link.
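As background, the sketch below shows the classic Q-routing idea that RL-based satellite routing schemes build on: per-destination next-hop values learned from observed delivery cost, with a load term for balancing. The data structures, load penalty, and learning rate are assumptions and do not reproduce the paper's algorithm.

```python
# Q[(dest, neighbor)] estimates the remaining cost of delivering to dest via that neighbor.
Q = {}

def best_next_hop(dest, neighbors, link_load, load_weight=0.5):
    # Prefer low estimated cost; penalise loaded links to spread traffic (load balancing).
    return min(neighbors, key=lambda n: Q.get((dest, n), 0.0) + load_weight * link_load.get(n, 0.0))

def q_routing_update(dest, chosen, observed_delay, best_estimate_at_next, lr=0.2):
    key = (dest, chosen)
    target = observed_delay + best_estimate_at_next
    Q[key] = Q.get(key, 0.0) + lr * (target - Q.get(key, 0.0))

hop = best_next_hop("sat-42", ["sat-7", "sat-9"], link_load={"sat-7": 0.8, "sat-9": 0.2})
q_routing_update("sat-42", hop, observed_delay=12.0, best_estimate_at_next=30.0)
```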
Keywords: low Earth orbit (LEO) satellite network; reinforcement learning; multi-quality of service (QoS); routing algorithm
4. FedCLCC: A personalized federated learning algorithm for edge cloud collaboration based on contrastive learning and conditional computing
Authors: Kangning Yin, Xinhui Ji, Yan Wang, Zhiguo Wang. Defence Technology (防务技术), 2025, Issue 1, pp. 80-93 (14 pages)
Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both data volume and infrastructure challenges in edge environments. However, the diversity of clients in edge cloud computing presents significant challenges for FL. Personalized federated learning (pFL) has received considerable attention in recent years. One example of pFL involves exploiting the global and local information in the local model. Current pFL algorithms experience limitations such as slow convergence speed, catastrophic forgetting, and poor performance in complex tasks, which still have significant shortcomings compared to centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the use of contrastive learning and conditional computing. Contrastive learning determines the feature representation similarity to adjust the local model. Conditional computing separates the global and local information and feeds it to their corresponding heads for global and local handling. Our comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
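The sketch below illustrates a model-contrastive regulariser of the general kind the abstract describes (pull the local model's features toward the global model's features and away from the stale local model's); it is an assumption-laden illustration of the technique, not FedCLCC's actual loss, and the temperature value is arbitrary.

```python
import torch
import torch.nn.functional as F

def model_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    # Treat (local, global) as the positive pair and (local, previous-local) as the negative pair.
    pos = F.cosine_similarity(z_local, z_global, dim=-1) / tau
    neg = F.cosine_similarity(z_local, z_prev, dim=-1) / tau
    logits = torch.stack([pos, neg], dim=-1)
    labels = torch.zeros(z_local.size(0), dtype=torch.long)   # index 0 = positive pair
    return F.cross_entropy(logits, labels)

z = torch.randn(8, 128)
loss = model_contrastive_loss(z, torch.randn(8, 128), torch.randn(8, 128))
```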
Keywords: Federated learning; Statistical heterogeneity; Personalized model; Conditional computing; Contrastive learning
5. Controlling update distance and enhancing fair trainable prototypes in federated learning under data and model heterogeneity
Authors: Kangning Yin, Zhen Ding, Xinhui Ji, Zhiguo Wang. Defence Technology (防务技术), 2025, Issue 5, pp. 15-31 (17 pages)
Heterogeneous federated learning (HtFL) has gained significant attention due to its ability to accommodate diverse models and data from distributed combat units. Prototype-based HtFL methods were proposed to reduce the high communication cost of transmitting model parameters. These methods allow for the sharing of only class representatives between heterogeneous clients while maintaining privacy. However, existing prototype learning approaches fail to take the data distribution of clients into consideration, which results in suboptimal global prototype learning and insufficient client model personalization capabilities. To address these issues, we propose a fair trainable prototype federated learning (FedFTP) algorithm, which employs a fair sampling training prototype (FSTP) mechanism and a hyperbolic space constraints (HSC) mechanism to enhance the fairness and effectiveness of prototype learning on the server in heterogeneous environments. Furthermore, a local prototype stable update (LPSU) mechanism is proposed as a means of maintaining personalization while promoting global consistency, based on contrastive learning. Comprehensive experimental results demonstrate that FedFTP achieves state-of-the-art performance in HtFL scenarios.
Keywords: Heterogeneous federated learning; Model heterogeneity; Data heterogeneity; Contrastive learning
6. Machine learning models for optimization, validation, and prediction of light emitting diodes with kinetin based basal medium for in vitro regeneration of upland cotton (Gossypium hirsutum L.)
Authors: ÖZKAT Gözde Yalçın, AASIM Muhammad, BAKHSH Allah, ALI Seyid Amjad, ÖZCAN Sebahattin. Journal of Cotton Research, 2025, Issue 2, pp. 228-241 (14 pages)
Background: Plant tissue culture has emerged as a tool for improving cotton propagation and genetics, but the recalcitrant nature of cotton makes it difficult to develop in vitro regeneration. Cotton's recalcitrance is influenced by genotype, explant type, and environmental conditions. To overcome these issues, this study uses different machine learning-based predictive models employing multiple input factors. Cotyledonary node explants of two commercial cotton cultivars (STN-468 and GSN-12) were isolated from 7-8 day old seedlings and preconditioned with 5, 10, and 20 mg·L^(-1) kinetin (KIN) for 10 days. Thereafter, explants were postconditioned on full Murashige and Skoog (MS), 1/2 MS, 1/4 MS, and full MS + 0.05 mg·L^(-1) KIN, and cultured in a growth room illuminated with a combination of red and blue light-emitting diodes (LED). Statistical analysis (analysis of variance, regression analysis) was employed to assess the impact of different treatments on shoot regeneration, with artificial intelligence (AI) models used for confirming the findings. Results: GSN-12 exhibited superior shoot regeneration potential compared with STN-468, with an average of 4.99 shoots per explant versus 3.97. Optimal results were achieved with 5 mg·L^(-1) KIN preconditioning, 1/4 MS postconditioning, and 80% red LED, with a maximum shoot count of 7.75 for GSN-12 under these conditions, while STN-468 reached 6.00 shoots under 10 mg·L^(-1) KIN preconditioning, MS with 0.05 mg·L^(-1) KIN postconditioning, and 75.0% red LED. Rooting was successfully achieved with naphthalene acetic acid and activated charcoal. Additionally, three powerful AI-based models, namely extreme gradient boosting (XGBoost), random forest (RF), and the artificial neural network-based multilayer perceptron (MLP) regression models, validated the findings. Conclusion: GSN-12 outperformed STN-468, with optimal results from 5 mg·L^(-1) KIN + 1/4 MS + 80% red LED. Applying machine learning-based prediction models to optimize cotton tissue culture protocols for shoot regeneration helps improve cotton regeneration efficiency.
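As an illustration of the kind of regression comparison the abstract describes, the sketch below cross-validates random forest, gradient boosting (standing in for XGBoost to keep the example dependency-free), and MLP regressors on synthetic treatment data; the feature encoding and data are invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((60, 4))                          # e.g. [cultivar, KIN dose, MS strength, red-LED fraction]
y = 3 + 4 * X[:, 3] + rng.normal(0, 0.3, 60)     # synthetic shoot counts, for illustration only

for name, model in [("RF", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("GBM (XGBoost stand-in)", GradientBoostingRegressor(random_state=0)),
                    ("MLP", MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```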
Keywords: Machine learning; Cotton; In vitro regeneration; Light emitting diodes; Optimization; Kinetin
7. Comparative analysis of machine learning and statistical models for cotton yield prediction in major growing districts of Karnataka, India
Authors: THIMMEGOWDA M.N., MANJUNATHA M.H., LINGARAJ H., SOUMYA D.V., JAYARAMAIAH R., SATHISHA G.S., NAGESHA L. Journal of Cotton Research, 2025, Issue 1, pp. 40-60 (21 pages)
Background: Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its use in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is greatly influenced by prevailing weather dynamics. As the climate changes, assessing how weather changes affect crop performance is essential. Among the various techniques available, crop models are the most effective and widely used tools for predicting yields. Results: This study compares statistical and machine learning models to assess their ability to predict cotton yield across major producing districts of Karnataka, India, utilizing a long-term dataset spanning from 1990 to 2023 that includes yield and weather factors. Artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and mid stage (F2) for cotton. Model evaluation metrics such as root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF) were also within acceptable limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts where yield was predicted. These differences highlight the differential interactions of weather factors in each district for cotton yield formation, reflecting the individual response of each weather factor under different soil and management conditions across the major cotton growing districts of Karnataka. Conclusions: Compared with statistical models, machine learning models such as ANNs proved more efficient in forecasting cotton yield due to their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This highlights the suitability of ANNs for yield forecasting under rainfed conditions and for studying the relative impacts of weather factors on yield. The study thus aims to provide valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
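The three evaluation metrics named in the abstract have standard definitions, sketched below with their common forms (the paper may normalise slightly differently); the sample yields are illustrative.

```python
import numpy as np

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))

def nrmse_percent(y, y_hat):
    return 100 * rmse(y, y_hat) / np.mean(y)      # RMSE normalised by the observed mean

def modelling_efficiency(y, y_hat):
    # Nash-Sutcliffe efficiency: 1 is perfect agreement, <= 0 is no better than the mean.
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

obs = np.array([410.0, 530.0, 480.0, 390.0])       # illustrative yields (kg/ha)
pred = np.array([425.0, 500.0, 470.0, 405.0])
print(rmse(obs, pred), nrmse_percent(obs, pred), modelling_efficiency(obs, pred))
```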
Keywords: Cotton; Machine learning models; Statistical models; Yield forecast; Artificial neural network; Weather variables
8. A*-pre-guided ant colony path planning algorithm incorporating Q-learning
Authors: 殷笑天, 杨丽英, 刘干, 何玉庆. 《传感器与微系统》 (PKU Core), 2025, Issue 8, pp. 143-147, 153 (6 pages)
To address the problems that the traditional ant colony optimization (ACO) algorithm easily falls into local optima, converges slowly, and has insufficient obstacle avoidance capability in path planning in complex environments, an A*-pre-guided ant colony path planning algorithm that fuses Q-learning with a hierarchical pheromone mechanism, named QHACO, is proposed. First, global pheromone is pre-allocated using the A* algorithm, guiding initial paths to approach the optimal solution quickly. Second, a global-local two-layer pheromone cooperation model is constructed, in which the global layer preserves the experience of historical elite paths and the local layer responds to environmental changes in real time. Finally, a Q-learning directional reward function is introduced to optimize the decision process, applying reinforced guidance signals at path turning points and obstacle edges. Experiments show that on a 25×24 map of moderate complexity, QHACO shortens the optimal path by 22.7% and improves convergence speed by 98.7% compared with the traditional ACO algorithm; in a 50×50 high-density obstacle environment, the optimal path length is improved by 16.9% and the number of iterations is reduced by 95.1%. Compared with traditional ACO, QHACO shows significant improvements in optimality, convergence speed, and obstacle avoidance, demonstrating strong environmental adaptability.
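The sketch below illustrates the A*-pre-guided pheromone idea in its simplest form: grid cells on a precomputed A* path (and their neighbours) start with extra pheromone so early ants are biased toward a near-optimal corridor. The path, deposit amounts, and neighbourhood rule are illustrative assumptions, not QHACO's actual parameters.

```python
import numpy as np

def preallocate_pheromone(grid_shape, astar_path, base=1.0, boost=5.0, neighbour_boost=2.0):
    tau = np.full(grid_shape, base)
    for (r, c) in astar_path:
        tau[r, c] += boost
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            rr, cc = r + dr, c + dc
            if 0 <= rr < grid_shape[0] and 0 <= cc < grid_shape[1]:
                tau[rr, cc] += neighbour_boost   # weaker deposit on cells adjacent to the A* path
    return tau

tau = preallocate_pheromone((25, 24), astar_path=[(0, 0), (1, 1), (2, 2), (3, 3)])
```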
Keywords: ant colony optimization algorithm; path planning; local optimum; convergence speed; Q-learning; hierarchical pheromone; A* algorithm
9. Fault-observer-based iterative learning model predictive controller for trajectory tracking of hypersonic vehicles
Authors: CUI Peng, GAO Changsheng, AN Ruoming. Journal of Systems Engineering and Electronics, 2025, Issue 3, pp. 803-813 (11 pages)
This work proposes the application of an iterative learning model predictive control (ILMPC) approach based on an adaptive fault observer (FOBILMPC) for fault-tolerant control and trajectory tracking in air-breathing hypersonic vehicles. In order to increase the control amount, this online control law makes use of model predictive control (MPC) based on the concept of iterative learning control (ILC). By using offline data to decrease the linearized model's faults, the strategy can effectively increase the robustness of the control system and guarantee that disturbances can be suppressed. An adaptive fault observer is created based on the suggested ILMPC approach in order to enhance overall fault tolerance by estimating and compensating for actuator disturbance and fault degree. During the derivation process, a linearized model of longitudinal dynamics is established. The suggested ILMPC approach is likely to be used in the design of hypersonic vehicle control systems, since numerical simulations have demonstrated that it can decrease tracking error and speed up convergence when compared to the offline controller.
Keywords: hypersonic vehicle; actuator fault; tracking control; iterative learning control (ILC); model predictive control (MPC); fault observer
10. Landslide susceptibility assessment of Yun'an District based on Bagging PU-learning and ensemble learning
Authors: 周武召, 徐峰, 孔润, 张高鹏, 苗超杰. 《遥感信息》 (PKU Core), 2025, Issue 3, pp. 1-10 (10 pages)
In landslide susceptibility modelling, non-landslide samples are unlabeled data that have not been field-checked or verified, which poses a challenge for the spatial prediction of landslide susceptibility, and conventional machine learning models struggle to process and fully extract the hidden information in landslide data. To solve these problems, this paper first selects 10 landslide influencing factors according to the conditions of Yun'an District and, based on field-collected landslide data, uses Bagging PU-learning to select non-landslide samples. The landslide and non-landslide samples are then split into training and test sets at a ratio of 7:3. Finally, a Stacking ensemble strategy fusing RF, GBDT, and PLS models is used to assess landslide susceptibility in Yun'an District. The results show that, under the conditions of limited landslide samples and non-landslide samples without field verification, the BPL (Bagging PU-learning) method effectively improves the modelling quality of non-landslide samples. The Stacking ensemble strategy achieves the best prediction accuracy: its overall accuracy is 1.264 to 10.643 percentage points higher than RF, GBDT, and PLS under random sample selection, and 2.221 to 7.826 percentage points higher than RF, GBDT, and PLS under the Bagging PU-learning method.
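For reference, here is a sketch of the generic Bagging PU-learning technique the abstract names (not the paper's exact configuration): random subsets of unlabeled cells are repeatedly treated as negatives, a classifier is trained against the landslide cells, and each unlabeled cell's out-of-bag scores are averaged; cells with the lowest averaged scores are kept as reliable non-landslide samples. The classifier choice, round count, and synthetic inputs are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def bagging_pu_scores(X_pos, X_unl, n_rounds=25, seed=0):
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(X_unl))
    counts = np.zeros(len(X_unl))
    for _ in range(n_rounds):
        idx = rng.choice(len(X_unl), size=len(X_pos), replace=False)     # pseudo-negatives
        X = np.vstack([X_pos, X_unl[idx]])
        y = np.r_[np.ones(len(X_pos)), np.zeros(len(idx))]
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        oob = np.setdiff1d(np.arange(len(X_unl)), idx)                   # score only held-out cells
        scores[oob] += clf.predict_proba(X_unl[oob])[:, 1]
        counts[oob] += 1
    return scores / np.maximum(counts, 1)

rng_demo = np.random.default_rng(1)
s = bagging_pu_scores(rng_demo.random((20, 10)), rng_demo.random((200, 10)), n_rounds=5)
```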
Keywords: landslide; susceptibility assessment; sample selection; ensemble learning; Yun'an District
11. Improved self-correcting Q-learning applied to intelligent robot path planning
Authors: 任伟, 朱建鸿. 《机械科学与技术》 (PKU Core), 2025, Issue 1, pp. 126-132 (7 pages)
To solve several problems in intelligent robot path planning, an improved self-correcting Q-learning algorithm is proposed. First, the greedy exploration factor is improved: a dynamic exploration factor is adopted to better balance exploration and exploitation. Second, in the Q-value initialization stage, the reciprocal of the distance between the current position and the goal position is used instead of the all-zero or random initialization of the traditional Q-learning algorithm, which greatly accelerates convergence. Finally, to address the maximization bias of the Q function in traditional Q-learning, a self-correcting estimator is introduced to correct the bias. The proposed improvements are verified through simulation experiments, and the results show that the improved algorithm greatly improves learning efficiency and outperforms the traditional algorithm in every respect.
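A minimal sketch of the distance-based Q initialization the abstract describes is given below: instead of zeros, each state's initial Q-values are set to the reciprocal of its distance to the goal, so early greedy choices already point toward the goal. The grid size, use of Euclidean distance, and the small epsilon at the goal cell are assumptions.

```python
import numpy as np

def init_q_by_distance(rows, cols, goal, n_actions=4, eps=1e-6):
    Q = np.zeros((rows, cols, n_actions))
    for r in range(rows):
        for c in range(cols):
            d = np.hypot(r - goal[0], c - goal[1])
            Q[r, c, :] = 1.0 / (d + eps)       # closer to the goal -> larger initial value
    return Q

Q = init_q_by_distance(20, 20, goal=(19, 19))
```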
Keywords: path planning; Q-learning; greedy search; initialization; self-correction
12. AODV routing protocol for UAV ad hoc networks based on distributed Q-learning training
Authors: 孙晨, 王宇昆, 万家梅, 侯亮. 《现代电子技术》 (PKU Core), 2025, Issue 15, pp. 103-109 (7 pages)
Given the high mobility of UAV ad hoc network nodes and the sparsity of the topology, existing routing protocols that incorporate Q-learning suffer from lagging Q-value updates and difficulty in adapting quickly to rapid topology changes. This paper proposes an AODV routing protocol based on distributed Q-learning training (DQL-AODV). In this protocol, each node is treated as an agent that selects the next hop for packets to be forwarded according to Q-values obtained by distributed training, and each node's Q-values undergo both local and global updates. First, a local reward is computed from the lifetime of inter-node links and node load capacity, and each received Hello message updates the more stable next-hop link to a higher Q-value. Second, when a route request message reaches the destination node, a global Q-value update is performed, with the global reward computed from the packet's forwarding hop count and average end-to-end delay. Finally, the Hello message sending mechanism is optimized with the Q-learning algorithm to effectively balance topology awareness against routing overhead. Simulation results show that, compared with QL-AODV, the proposed method improves the four network performance metrics of average end-to-end delay, data throughput, packet delivery ratio, and routing overhead by 19.93%, 15.48%, 6.24%, and 11.76% respectively overall, with stronger convergence, verifying the effectiveness of the protocol.
Keywords: UAV ad hoc network; AODV routing protocol; distributed Q-learning training; link quality; Hello message; routing decision
13. Mobile robot path planning algorithm based on improved Q-Learning (Cited: 3)
Authors: 王立勇, 王弘轩, 苏清华, 王绅同, 张鹏博. 《电子测量技术》 (PKU Core), 2024, Issue 9, pp. 85-92 (8 pages)
As mobile robots are increasingly applied in production and daily life, their path planning capability needs to develop toward both speed and environmental adaptability. To solve the problems that existing mobile robots using reinforcement learning for path planning easily fall into local optima and repeatedly search the same region in the early exploration stage, and converge slowly and at a low rate in the later stage, this study proposes an improved Q-Learning algorithm. The algorithm improves the Q-matrix initialization method so that early exploration is directed and collisions are reduced; it improves the Q-matrix iteration method so that Q-matrix updates are forward-looking, avoiding repeated exploration within a small region; and it improves the random exploration strategy, making full use of environmental information in the early iterations and moving toward the goal point in the later iterations. Simulation results on different grid maps show that, building on the Q-Learning algorithm, the proposed improvements reduce the path length during exploration, reduce oscillation, and increase convergence speed, giving higher computational efficiency.
Keywords: path planning; reinforcement learning; mobile robot; Q-learning algorithm; ε-decreasing strategy
14. Research on a path planning algorithm based on improved Q-Learning (Cited: 7)
Authors: 宋丽君, 周紫瑜, 李云龙, 侯佳杰, 何星. 《小型微型计算机系统》 (CSCD; PKU Core), 2024, Issue 4, pp. 823-829 (7 pages)
To address the low learning efficiency, slow convergence, and poor path planning performance of the Q-Learning algorithm in environments with dynamic obstacles, this paper proposes an improved Q-Learning path planning algorithm for mobile robots. The algorithm introduces an exploration factor based on the abruptness of probability changes to balance exploration and exploitation and speed up learning; it designs a deep learning factor in the update function to guarantee the algorithm's exploration probability; it incorporates a genetic algorithm to avoid falling into locally optimal paths while exploring the optimal number of iteration steps in stages, reducing the repetition rate of exploration on dynamic maps; and finally, the key nodes of the output optimal path are extracted and smoothed with a Bézier curve to further ensure path smoothness and feasibility. Experiments on maps built with the grid method show that, compared with the traditional algorithm, the improved algorithm is substantially better in both number of iterations and path quality, and can perform path planning well on dynamic maps, further verifying the effectiveness and practicality of the proposed method.
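The Bézier smoothing step mentioned in the abstract follows the standard Bernstein-polynomial form; the sketch below samples a smooth curve from a few path waypoints used as control points. The waypoints and sample count are illustrative.

```python
import numpy as np
from math import comb

def bezier_curve(control_points, n_samples=50):
    P = np.asarray(control_points, dtype=float)
    n = len(P) - 1
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    curve = np.zeros((n_samples, P.shape[1]))
    for i in range(n + 1):
        bernstein = comb(n, i) * (1 - t) ** (n - i) * t ** i   # Bernstein basis polynomial
        curve += bernstein * P[i]
    return curve

smooth_path = bezier_curve([(0, 0), (2, 5), (6, 5), (9, 9)])
```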
Keywords: mobile robot; path planning; Q-learning algorithm; smoothing; dynamic obstacle avoidance
15. Softmax-based weighted Double Q-Learning algorithm (Cited: 3)
Authors: 钟雨昂, 袁伟伟, 关东海. 《计算机科学》 (CSCD; PKU Core), 2024, Issue S01, pp. 46-50 (5 pages)
Reinforcement learning, a branch of machine learning, describes and solves the problem of an agent maximizing its return by learning a policy while interacting with the environment. Q-Learning, a classic model-free reinforcement learning method, suffers from a maximization bias caused by overestimation and performs poorly when the rewards in the environment are noisy. Double Q-Learning (DQL) eliminates the overestimation problem but introduces underestimation. To address the over- and underestimation in these algorithms, a softmax-based weighted Q-Learning algorithm is proposed and combined with DQL to form a new softmax-based weighted Double Q-Learning algorithm (WDQL-Softmax). The algorithm builds on a weighted double estimator: a softmax operation over the sample expected values yields weights, which are used to estimate action values, effectively balancing overestimation and underestimation and bringing the estimates closer to the theoretical values. Experimental results show that in discrete action spaces, compared with the Q-Learning, DQL, and WDQL algorithms, WDQL-Softmax converges faster and has a smaller error between the estimated and theoretical values.
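The sketch below shows a weighted Double Q-learning update in the spirit the abstract describes: two estimators are kept and the bootstrap target blends them with a weight derived from a softmax over action values. The exact weighting rule is the paper's contribution; the beta used here is only an illustrative choice.

```python
import numpy as np

N_STATES, N_ACTIONS = 10, 4
Q_A = np.zeros((N_STATES, N_ACTIONS))
Q_B = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, tau = 0.1, 0.95, 0.5

def softmax(x, tau):
    z = np.exp((x - x.max()) / tau)
    return z / z.sum()

def weighted_double_q_update(s, a, r, s_next):
    # Randomly pick which estimator to update, as in standard Double Q-learning.
    Q_main, Q_other = (Q_A, Q_B) if np.random.random() < 0.5 else (Q_B, Q_A)
    a_star = int(np.argmax(Q_main[s_next]))
    beta = softmax(Q_other[s_next], tau)[a_star]          # illustrative softmax-derived weight
    target = r + gamma * (beta * Q_main[s_next, a_star] + (1 - beta) * Q_other[s_next, a_star])
    Q_main[s, a] += alpha * (target - Q_main[s, a])

weighted_double_q_update(s=0, a=2, r=1.0, s_next=3)
```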
Keywords: reinforcement learning; Q-learning; Double Q-learning; softmax
16. Machine learning for predicting the outcome of terminal ballistics events (Cited: 3)
Authors: Shannon Ryan, Neeraj Mohan Sushma, Arun Kumar AV, Julian Berk, Tahrima Hashem, Santu Rana, Svetha Venkatesh. Defence Technology (防务技术) (SCIE; EI; CAS; CSCD), 2024, Issue 1, pp. 14-26 (13 pages)
Machine learning (ML) is well suited to the prediction of high-complexity, high-dimensional problems such as those encountered in terminal ballistics. We evaluate the performance of four popular ML-based regression models, extreme gradient boosting (XGBoost), artificial neural network (ANN), support vector regression (SVR), and Gaussian process regression (GP), on two common terminal ballistics problems: (a) predicting the V50 ballistic limit of monolithic metallic armour impacted by small and medium calibre projectiles and fragments, and (b) predicting the depth to which a projectile will penetrate a target of semi-infinite thickness. To achieve this we utilise two datasets, each consisting of approximately 1000 samples, collated from public release sources. We demonstrate that all four model types provide similarly excellent agreement when interpolating within the training data and diverge when extrapolating outside this range. Although extrapolation is not advisable for ML-based regression models, for applications such as lethality/survivability analysis such capability is required. To circumvent this, we implement expert knowledge and physics-based models via enforced monotonicity, as a Gaussian prior mean, and through a modified loss function. The physics-informed models demonstrate improved performance over both classical physics-based models and the basic ML regression models, providing an ability to accurately fit experimental data when it is available and then revert to the physics-based model when not. The resulting models demonstrate high levels of predictive accuracy over a very wide range of projectile types, target materials and thicknesses, and impact conditions, significantly more diverse than that achievable from any existing analytical approach. Compared with numerical analysis tools such as finite element solvers, the ML models run orders of magnitude faster. We provide some general guidelines throughout for the development, application, and reporting of ML models in terminal ballistics problems.
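One common way to enforce the kind of monotonicity constraint the abstract mentions is shown below (a sketch of the general technique, not the authors' implementation): a gradient-boosted regressor is constrained so that predicted penetration depth is non-decreasing in impact velocity and non-increasing in target hardness. The features and data are synthetic.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(300, 1500, 500),     # impact velocity (m/s)
                     rng.uniform(100, 500, 500)])     # target hardness (HV)
y = 0.02 * X[:, 0] - 0.01 * X[:, 1] + rng.normal(0, 0.5, 500)   # synthetic depth (mm)

# monotonic_cst: +1 = prediction must not decrease with the feature, -1 = must not increase.
model = HistGradientBoostingRegressor(monotonic_cst=[1, -1], random_state=0).fit(X, y)
pred = model.predict(np.array([[900.0, 250.0]]))
```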
Keywords: Machine learning; Artificial intelligence; Physics-informed machine learning; Terminal ballistics; Armour
17. Deep Learning Hybrid Model for Lithium-Ion Battery Aging Estimation and Prediction
Authors: 项越, 姜波, 戴海峰. 《同济大学学报(自然科学版)》 (EI; CAS; CSCD; PKU Core), 2024, Issue S01, pp. 215-222 (8 pages)
The degradation process of lithium-ion batteries is intricately linked to their entire lifecycle as power sources and energy storage devices, encompassing aspects such as performance delivery and cycling utilization. Consequently, the accurate and expedient estimation or prediction of the aging state of lithium-ion batteries has garnered extensive attention. Nonetheless, prevailing research predominantly concentrates on either aging estimation or prediction, neglecting the dynamic fusion of both facets. This paper proposes a hybrid model for capacity aging estimation and prediction based on deep learning, wherein salient features highly pertinent to aging are extracted from charge and discharge relaxation processes. By amalgamating historical capacity decay data, the model dynamically furnishes estimations of the present capacity and forecasts of future capacity for lithium-ion batteries. Our approach is validated against a novel dataset involving charge and discharge cycles at varying rates. Specifically, under a charging condition of 0.25 C, a mean absolute percentage error (MAPE) of 0.29% is achieved. This outcome underscores the model's adeptness in harnessing relaxation processes commonly encountered in the real world and synergizing with historical capacity records within battery management systems (BMS), thereby affording estimations and prognostications of capacity decline with heightened precision.
Keywords: lithium-ion battery; state of health; deep learning; relaxation process
18. Improved Q-learning artificial bee colony algorithm for the permutation flow shop scheduling problem
Authors: 杜利珍, 宣自风, 唐家琦, 王鑫涛. 《组合机床与自动化加工技术》 (PKU Core), 2024, Issue 10, pp. 175-180 (6 pages)
For the permutation flow shop scheduling problem, an artificial bee colony algorithm based on an improved Q-learning algorithm is proposed. The algorithm designs an improved reward function that serves as the environment for the artificial bee colony algorithm: the quality of the reward determines the search strategy of the next generation of the population, and Q-learning intelligently selects the number of dimensions to update for the food sources of the artificial bee colony algorithm, with the encoding updated according to the selected number of dimensions, improving convergence speed and accuracy. Finally, permutation flow shop scheduling instances of different sizes are used to verify the performance of the proposed algorithm, and computations on standard instances, compared with other algorithms, demonstrate its accuracy.
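For context, the objective usually optimised in permutation flow shop scheduling is the makespan, computed with the standard completion-time recurrence C[i][j] = max(C[i-1][j], C[i][j-1]) + p[job_i][machine_j]; the sketch below assumes makespan is the objective here and uses invented processing times.

```python
import numpy as np

def makespan(sequence, proc_times):
    m = proc_times.shape[1]                      # number of machines
    C = np.zeros((len(sequence) + 1, m + 1))     # padded completion-time table
    for i, job in enumerate(sequence, start=1):
        for j in range(1, m + 1):
            C[i, j] = max(C[i - 1, j], C[i, j - 1]) + proc_times[job, j - 1]
    return C[len(sequence), m]

p = np.array([[3, 2, 4], [2, 5, 1], [4, 1, 3]])  # 3 jobs x 3 machines, illustrative times
print(makespan([0, 1, 2], p), makespan([2, 0, 1], p))
```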
Keywords: Q-learning algorithm; artificial bee colony algorithm; permutation flow shop scheduling
19. Q-Learning based trust management mechanism for clustered wireless sensor networks (Cited: 3)
Authors: 赵远亮, 王涛, 李平, 吴雅婷, 孙彦赞, 王瑞. 《上海大学学报(自然科学版)》 (CAS; CSCD; PKU Core), 2024, Issue 2, pp. 255-266 (12 pages)
To address security problems in wireless sensor networks, a Q-Learning based trust management mechanism for clustered wireless sensor networks (QLTMM-CWSN) is proposed. The mechanism mainly considers three aspects: communication trust, data trust, and energy trust. While the network is running, node trust values are updated with the Q-Learning algorithm based on each node's communication behavior, data distribution, and energy consumption, and the node with the highest trust value in a cluster is selected as the trusted cluster head. When the trust value of the primary cluster head falls below a threshold, the trusted cluster head takes over management of the member nodes in the cluster and maintains normal data transmission. The results show that QLTMM-CWSN can effectively resist communication attacks, forged local data attacks, energy attacks, and hybrid attacks.
Keywords: wireless sensor network; Q-learning; trust management mechanism; network security
20. Research on Transfer Learning in Surface Defect Detection of Printed Products (Cited: 1)
Authors: ZHU Xin-yu, SI Zhan-jun, CHEN Zhi-yu. 《印刷与数字媒体技术研究》 (CAS; PKU Core), 2024, Issue 6, pp. 38-44 (7 pages)
To advance the printing manufacturing industry towards intelligence and address the challenges faced by supervised learning, such as high workload, cost, poor generalization, and labeling issues, an unsupervised and transfer learning-based method for printing defect detection was proposed in this study. This method enables defect detection on printed surfaces without the need for extensive labeled defect data. The ResNet101-SSTU model was used in this study. On a public dataset of printing defect images, the ResNet101-SSTU model not only achieves performance and speed comparable to mainstream supervised learning detection models but also successfully addresses some of the detection challenges encountered in supervised learning. The proposed ResNet101-SSTU model effectively eliminates the need for extensive defect samples and labeled data in training, providing an efficient solution for quality inspection in the printing industry.
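The sketch below shows the usual transfer-learning starting point such a method builds on: an ImageNet-pretrained ResNet101 backbone frozen and used as a feature extractor. The SSTU component is the paper's own contribution and is not reproduced; input size and the frozen-backbone choice are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet101, ResNet101_Weights

backbone = resnet101(weights=ResNet101_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                      # freeze the transferred ImageNet weights
backbone.fc = nn.Identity()                      # use the backbone as a 2048-d feature extractor
backbone.eval()

features = backbone(torch.randn(1, 3, 224, 224)) # e.g. a printed-surface image crop
print(features.shape)                            # torch.Size([1, 2048])
```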
Keywords: Transfer learning; Unsupervised; Defect detection; Printing