Journal Articles
442 articles found
1. Multi-QoS routing algorithm based on reinforcement learning for LEO satellite networks (cited: 1)
Authors: ZHANG Yifan, DONG Tao, LIU Zhihui, JIN Shichao. Journal of Systems Engineering and Electronics, 2025, Issue 1, pp. 37-47.
Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources of individual satellite nodes and a dynamic network topology, which bring many challenges for routing algorithms. To satisfy the quality of service (QoS) requirements of various users, it is critical to research efficient routing strategies that fully utilize satellite resources. This paper proposes a multi-QoS information optimized routing algorithm based on reinforcement learning for LEO satellite networks, which guarantees that services with high-level assurance demands are prioritized under limited satellite resources, while considering the load-balancing performance of the satellite network for services with low-level assurance demands to ensure full and effective utilization of satellite resources. An auxiliary path search algorithm is proposed to accelerate the convergence of the satellite routing algorithm. Simulation results show that the generated routing strategy can promptly process and fully meet the QoS demands of high-assurance services while effectively improving the load-balancing performance of the links.
Keywords: low Earth orbit (LEO) satellite network; reinforcement learning; multi-quality of service (QoS); routing algorithm
2. Tactical reward shaping for large-scale combat by multi-agent reinforcement learning
Authors: DUO Nanxun, WANG Qinzhao, LYU Qiang, WANG Wei. Journal of Systems Engineering and Electronics, 2024, Issue 6, pp. 1516-1529.
Future unmanned battles desperately require intelligent combat policies, and multi-agent reinforcement learning offers a promising solution. However, due to the complexity of combat operations and the large size of the combat group, this task suffers from the credit assignment problem more than other reinforcement learning tasks. This study uses reward shaping to relieve the credit assignment problem and improve policy training for the new generation of large-scale unmanned combat operations. We first prove that multiple reward shaping functions do not change the Nash equilibrium in stochastic games, providing theoretical support for their use. According to the characteristics of combat operations, we propose tactical reward shaping (TRS), which comprises maneuver shaping advice and threat-assessment-based attack shaping advice. Then, we investigate the effects of different types and combinations of shaping advice on combat policies through experiments. The results show that TRS improves both the efficiency and attack accuracy of combat policies, with the combination of maneuver reward shaping advice and ally-focused attack shaping advice achieving the best performance compared with the baseline strategy.
Keywords: deep reinforcement learning; multi-agent reinforcement learning; multi-agent combat; unmanned battle; reward shaping
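The theoretical claim above, that shaping rewards can be added without changing the equilibria of the game, is usually realized with potential-based shaping. A minimal sketch, assuming a hypothetical distance-to-goal potential that is not taken from the paper:

```python
# Potential-based reward shaping: F(s, s') = gamma * phi(s') - phi(s).
# Adding F to the environment reward preserves optimal policies (and, per the
# abstract's claim, Nash equilibria in stochastic games). The potential used
# here (negative Euclidean distance to a goal) is an illustrative assumption.

def phi(state, goal=(0.0, 0.0)):
    """Potential: negative Euclidean distance from state to the goal."""
    return -((state[0] - goal[0]) ** 2 + (state[1] - goal[1]) ** 2) ** 0.5

def shaped_reward(r_env, s, s_next, gamma=0.99):
    """Environment reward plus the potential-based shaping term."""
    return r_env + gamma * phi(s_next) - phi(s)

# Moving toward the goal earns a positive shaping bonus on top of r_env.
bonus = shaped_reward(0.0, (3.0, 4.0), (0.0, 4.0))
```

With an identical state before and after (and gamma = 1), the shaping term vanishes, which is the property that keeps the underlying game unchanged.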
3. A single-task and multi-decision evolutionary game model based on multi-agent reinforcement learning (cited: 3)
Authors: MA Ye, CHANG Tianqing, FAN Wenhui. Journal of Systems Engineering and Electronics, 2021, Issue 3, pp. 642-657.
In the evolutionary game of the same task for groups, changes in game rules, personal interests, crowd size, and external supervision cause uncertain effects on individual decision-making and game results. In the Markov decision framework, a single-task multi-decision evolutionary game model based on multi-agent reinforcement learning is proposed to explore the evolutionary rules in the process of a game. The model can improve the result of an evolutionary game and facilitate the completion of the task. First, based on multi-agent theory, and to solve the existing problems in the original model, a negative-feedback tax penalty mechanism is proposed to guide the strategy selection of individuals in the group. In addition, to evaluate the evolutionary game results of the group in the model, a calculation method for the group intelligence level is defined. Second, the Q-learning algorithm is used to improve the guiding effect of the negative-feedback tax penalty mechanism. In the model, the selection strategy of the Q-learning algorithm is improved, and a bounded-rationality evolutionary game strategy is proposed based on the rules of evolutionary games and consideration of the bounded rationality of individuals. Finally, simulation results show that the proposed model can effectively guide individuals to choose cooperation strategies that benefit task completion and stability under different negative-feedback factor values and different group sizes, thereby improving the group intelligence level.
Keywords: multi-agent reinforcement learning; evolutionary game; Q-learning
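The Q-learning core that this model builds on is the standard tabular update. A minimal sketch; the state labels and the two game actions are illustrative assumptions, not the paper's model:

```python
# Tabular Q-learning update:
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
# States, actions, and rewards below are illustrative assumptions.

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step on a dict-backed Q-table keyed by (state, action)."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
    return Q

actions = ["cooperate", "defect"]
Q = {}
q_update(Q, "s0", "cooperate", 1.0, "s1", actions)
```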
4. Knowledge transfer in multi-agent reinforcement learning with incremental number of agents (cited: 4)
Authors: LIU Wenzhang, DONG Lu, LIU Jian, SUN Changyin. Journal of Systems Engineering and Electronics, 2022, Issue 2, pp. 447-460.
In this paper, the reinforcement learning method for cooperative multi-agent systems (MAS) with an incremental number of agents is studied. Existing multi-agent reinforcement learning approaches deal with a MAS with a specific number of agents and can learn well-performing policies. However, if the number of agents increases, the previously learned policies may not perform well in the current scenario. The new agents need to learn from scratch to find optimal policies with the others, which may slow down the learning speed of the whole team. To solve that problem, this paper proposes a new algorithm that takes full advantage of the historical knowledge learned before and transfers it from the previous agents to the new agents. Since the previous agents have been trained well in the source environment, they are treated as teacher agents in the target environment; correspondingly, the new agents are called student agents. To enable the student agents to learn from the teacher agents, we first modify the input nodes of the teacher agents' networks to adapt to the current environment. Then, the teacher agents take the observations of the student agents as input and output advised actions and values as supervising information. Finally, the student agents combine the reward from the environment and the supervising information from the teacher agents, and learn optimal policies with modified loss functions. By taking full advantage of the teacher agents' knowledge, the search space for the student agents is reduced significantly, which accelerates the learning speed of the whole system. The proposed algorithm is verified in several multi-agent simulation environments, and its efficiency is demonstrated by the experimental results.
Keywords: knowledge transfer; multi-agent reinforcement learning (MARL); new agents
5. Cooperative multi-target hunting by unmanned surface vehicles based on multi-agent reinforcement learning (cited: 2)
Authors: Jiawei Xia, Yasong Luo, Zhikun Liu, Yalun Zhang, Haoran Shi, Zhong Liu. Defence Technology, 2023, Issue 11, pp. 80-94.
To solve the problem of multi-target hunting by an unmanned surface vehicle (USV) fleet, a hunting algorithm based on multi-agent reinforcement learning is proposed. First, the hunting environment and a kinematic model without boundary constraints are built, and criteria for successful target capture are given. Then, the cooperative hunting problem of a USV fleet is modeled as a decentralized partially observable Markov decision process (Dec-POMDP), and a distributed partially observable multi-target hunting proximal policy optimization (DPOMH-PPO) algorithm applicable to USVs is proposed. In addition, an observation model, a reward function, and an action space applicable to multi-target hunting tasks are designed. To deal with the dynamically changing dimension of the observational features input by partially observable systems, a feature embedding block is proposed: by combining the two feature-compression methods of column-wise max pooling (CMP) and column-wise average pooling (CAP), an observational feature encoding is established. Finally, the centralized-training decentralized-execution framework is adopted to train the hunting strategy. Each USV in the fleet shares the same policy and performs actions independently. Simulation experiments verify the effectiveness of the DPOMH-PPO algorithm in test scenarios with different numbers of USVs. Moreover, the advantages of the proposed model are analyzed in terms of algorithm performance, transfer across task scenarios, and self-organization capability after damage, verifying the potential deployment and application of DPOMH-PPO in real environments.
Keywords: unmanned surface vehicles; multi-agent deep reinforcement learning; cooperative hunting; feature embedding; proximal policy optimization
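The CMP/CAP feature-compression idea can be sketched in a few lines: a variable number of per-entity observation vectors is reduced to a fixed-size code, which is why the input dimension no longer depends on how many targets are visible. The toy feature vectors below are illustrative assumptions, not the paper's observation model:

```python
# Sketch of combining column-wise max pooling (CMP) and column-wise average
# pooling (CAP): the concatenated result always has size 2 * feature_dim,
# regardless of how many entities are observed.

def embed(observations):
    """observations: list of equal-length feature vectors, one per entity."""
    cols = list(zip(*observations))               # transpose to columns
    cmp_feat = [max(c) for c in cols]             # column-wise max pooling
    cap_feat = [sum(c) / len(c) for c in cols]    # column-wise average pooling
    return cmp_feat + cap_feat                    # fixed size: 2 * feature_dim

# Two observed targets vs. three: the embedding size stays the same.
e2 = embed([[1.0, 0.0], [3.0, 2.0]])
e3 = embed([[1.0, 0.0], [3.0, 2.0], [2.0, 4.0]])
```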
6. UAV frequency-based crowdsensing using grouping multi-agent deep reinforcement learning
Authors: Cui ZHANG, En WANG, Funing YANG, Yongjian YANG, Nan JIANG. 《计算机科学》 (Computer Science), 2023, Issue 2, pp. 57-68.
Mobile crowdsensing (MCS) is a promising sensing paradigm that recruits users to cooperatively perform sensing tasks. Recently, unmanned aerial vehicles (UAVs) have been used as powerful sensing devices to replace user participation and carry out special tasks such as epidemic monitoring and earthquake rescue. In this paper, we focus on scheduling UAVs to sense task points of interest (PoIs) with different frequency coverage requirements. To accomplish the sensing task, the scheduling strategy needs to consider coverage requirements, geographic fairness, and energy charging simultaneously. We consider the complex interaction among UAVs and propose a grouping multi-agent deep reinforcement learning approach (G-MADDPG) to schedule UAVs distributively. G-MADDPG groups all UAVs into teams using a distance-based clustering algorithm (DCA) and then regards each team as an agent. In this way, G-MADDPG solves the problem that the training time of traditional MADDPG is too long to converge when the number of UAVs is large, and the trade-off between training time and result accuracy can be controlled flexibly by adjusting the number of teams. Extensive simulation results show that our scheduling strategy outperforms three baselines and is flexible in balancing training time and result accuracy.
Keywords: UAV; crowdsensing; frequency coverage; grouping multi-agent deep reinforcement learning
7. Deep reinforcement learning for UAV swarm rendezvous behavior (cited: 2)
Authors: ZHANG Yaozhong, LI Yike, WU Zhuoran, XU Jialin. Journal of Systems Engineering and Electronics, 2023, Issue 2, pp. 360-373.
Unmanned aerial vehicle (UAV) swarm technology has been a research hotspot in recent years. With the continuous improvement of UAV autonomy, swarm technology will become one of the main trends in future UAV development. This paper studies the behavior decision-making process of a UAV swarm rendezvous task based on the double deep Q-network (DDQN) algorithm. We design a guided reward function that effectively solves the convergence problem caused by sparse returns in deep reinforcement learning (DRL) for long-duration tasks. We also propose the concept of a temporary storage area, optimizing the experience replay unit of the traditional DDQN algorithm, improving the convergence speed of the algorithm, and speeding up training. Unlike traditional task environments, this paper establishes a continuous state-space task environment model to improve the validation of the UAV task environment. Based on the DDQN algorithm, the collaborative tasks of the UAV swarm in different scenarios are trained. The experimental results validate that the DDQN algorithm efficiently trains the UAV swarm to complete the given collaborative tasks while meeting the swarm's requirements for centralization and autonomy, improving the intelligence of collaborative task execution. The simulation results show that after training, the proposed UAV swarm carries out the rendezvous task well, with a mission success rate of 90%.
Keywords: double deep Q-network (DDQN) algorithm; unmanned aerial vehicle (UAV) swarm; task decision; deep reinforcement learning (DRL); sparse returns
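The DDQN update that the approach builds on decouples action selection from evaluation: the online network picks the argmax action and the target network scores it. A minimal sketch of the target computation, with toy Q-value lists standing in for the two networks (an assumption, not the paper's architecture):

```python
# Double DQN target: y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).
# Using the online network only for selection reduces the overestimation bias
# of vanilla DQN. The Q-value lists below are illustrative assumptions.

def ddqn_target(r, q_online_next, q_target_next, gamma=0.99, done=False):
    """Compute the bootstrapped DDQN target for one transition."""
    if done:
        return r
    a_star = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return r + gamma * q_target_next[a_star]

y = ddqn_target(1.0, q_online_next=[0.2, 0.8], q_target_next=[0.5, 0.4])
```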
8. Targeted multi-agent communication algorithm based on state control
Authors: Li-yang Zhao, Tian-qing Chang, Lei Zhang, Jie Zhang, Kai-xuan Chu, De-peng Kong. Defence Technology, 2024, Issue 1, pp. 544-556.
As an important mechanism in multi-agent interaction, communication can make agents form complex team relationships rather than a simple set of multiple independent agents. However, existing communication schemes bring much timing redundancy and many irrelevant messages, which seriously affects their practical application. To solve this problem, this paper proposes a targeted multi-agent communication algorithm based on state control (SCTC). The SCTC uses a gating mechanism based on state control to reduce the timing redundancy of communication between agents, and determines the interaction relationships between agents and the importance weight of a communication message through a series connection of hard- and self-attention mechanisms, realizing targeted processing of communication messages. In addition, by minimizing the difference between the fusion message generated from the real communication messages of each agent and the fusion message generated from buffered messages, the correctness of the agent's final action choice is ensured. Our evaluation on a challenging set of StarCraft II benchmarks indicates that the SCTC can significantly improve learning performance and reduce the communication overhead between agents, ensuring better cooperation between agents.
Keywords: multi-agent deep reinforcement learning; state control; targeted interaction; communication mechanism
9. A UAV collaborative defense scheme driven by the DDPG algorithm (cited: 3)
Authors: ZHANG Yaozhong, WU Zhuoran, XIONG Zhenkai, CHEN Long. Journal of Systems Engineering and Electronics, 2023, Issue 5, pp. 1211-1224.
The deep deterministic policy gradient (DDPG) algorithm is an off-policy method that combines the two mainstream reinforcement learning approaches based on value iteration and policy iteration. Using the DDPG algorithm, agents can explore and summarize the environment to make autonomous decisions in continuous state and action spaces. In this paper, cooperative defense with DDPG via swarms of unmanned aerial vehicles (UAVs) is developed and validated, showing promising practical value for defense. We solve the sparse-reward problem of reinforcement learning in a long-term task by building a reward function for UAV swarms and optimizing the learning process of the artificial neural network based on the DDPG algorithm to reduce oscillation during learning. The experimental results show that the DDPG algorithm can guide the UAV swarm to perform the defense task efficiently, meeting the requirements of a UAV swarm for decentralization and autonomy, and promoting the intelligent development of UAV swarms as well as the decision-making process.
Keywords: deep deterministic policy gradient (DDPG) algorithm; unmanned aerial vehicle (UAV) swarm; task decision-making; deep reinforcement learning; sparse reward problem
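DDPG's value-iteration half shows up in the critic target, where the deterministic target actor supplies the next action. A minimal sketch with toy linear stand-ins for the target networks (assumptions, not the paper's architecture):

```python
# DDPG critic target: y = r + gamma * Q'(s', mu'(s')).
# mu' is the target actor (deterministic policy), Q' the target critic.
# The linear toy networks below are illustrative assumptions.

def ddpg_critic_target(r, s_next, mu_target, q_target, gamma=0.99):
    """Bootstrapped critic target for one transition."""
    a_next = mu_target(s_next)
    return r + gamma * q_target(s_next, a_next)

mu_target = lambda s: 0.5 * s        # toy deterministic target policy
q_target = lambda s, a: s + a        # toy target critic
y = ddpg_critic_target(1.0, 2.0, mu_target, q_target)
```

In practice the critic is regressed toward y and the actor is updated along the critic's action gradient; only the target computation is sketched here.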
10. Mobile robot path planning algorithm based on improved Q-Learning (cited: 3)
Authors: WANG Liyong, WANG Hongxuan, SU Qinghua, WANG Shentong, ZHANG Pengbo. 《电子测量技术》, 2024, Issue 9, pp. 85-92.
As mobile robots are applied ever more deeply in production and daily life, their path planning must develop both speed and adaptability to the environment. Existing reinforcement-learning-based path planning for mobile robots tends to fall into local optima and repeatedly search the same region early in exploration, and suffers from a low convergence rate and slow convergence later on. To solve these problems, this study proposes an improved Q-Learning algorithm. The algorithm improves the Q-matrix initialization so that early exploration is directed and collisions are reduced; improves the Q-matrix iteration method so that updates are forward-looking, avoiding repeated exploration within a small region; and improves the random exploration strategy, making full use of environment information early in the iterations and moving toward the goal point later. Simulations on different grid maps show that, building on Q-Learning, these improvements shorten the explored path length, reduce oscillation, and speed up convergence, yielding higher computational efficiency.
Keywords: path planning; reinforcement learning; mobile robot; Q-learning algorithm; ε-decreasing strategy
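The ε-decreasing strategy named in the keywords explores heavily at first and exploits more as training progresses. A minimal sketch; the decay schedule and action values are illustrative assumptions, not the paper's tuned parameters:

```python
# Epsilon-greedy action selection with a decreasing epsilon: early episodes
# favor random exploration, later episodes favor the greedy action.
import random

def epsilon(t, eps0=0.9, eps_min=0.05, decay=0.99):
    """Exponentially decaying exploration rate, floored at eps_min."""
    return max(eps_min, eps0 * decay ** t)

def select_action(Q_row, t, rng=random.random, choice=random.choice):
    """Q_row maps action -> value; rng/choice injectable for testing."""
    if rng() < epsilon(t):
        return choice(list(Q_row))           # explore: random action
    return max(Q_row, key=Q_row.get)         # exploit: greedy action
```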
11. The Q-learning algorithm and its implementation in the prisoner's dilemma problem (cited: 7)
Authors: ZHANG Chunyang, CHEN Xiaoping, LIU Guiquan, CAI Qingsheng. 《计算机工程与应用》, 2001, Issue 13, pp. 121-122, 128.
Q-learning is an excellent reinforcement learning algorithm. This paper first describes the basic learning mechanism of Q-learning, then, taking the prisoner's dilemma as the setting, analyzes and compares the Q-learning algorithm with the TFT (tit-for-tat) algorithm, verifying the favorable properties of Q-learning.
Keywords: machine learning; reinforcement learning; Q-learning algorithm; prisoner's dilemma; artificial intelligence
12. Optimal operation of multi-reservoir systems based on n-step Q-learning under the discrete four-reservoir benchmark (cited: 5)
Authors: HU Hexuan, QIAN Zeyu, HU Qiang, ZHANG Ye. 《中国水利水电科学研究院学报(中英文)》, 2023, Issue 2, pp. 138-147.
Optimal reservoir operation is an optimization problem with the Markov property. Reinforcement learning, a current research focus for Markov decision processes, performs well on single-reservoir operation, but the complexity of multi-reservoir systems makes it difficult to apply. For the complex multi-reservoir operation problem, this paper proposes an n-step Q-learning method under the discrete four-reservoir benchmark. Based on the n-step Q-learning algorithm, a reinforcement learning model of multi-reservoir operation is built for the benchmark, and the optimal operation scheme is generated by optimizing over exploration experience. Experimental results show that, given enough exploration experience to learn from, one-step Q-learning combined with a penalty function reaches the theoretical optimum. Replacing the penalty function with a feasible-direction method to enforce constraints, the method builds a table of feasible states per period and a hash table of admissible actions per state and period from the benchmark constraints, effectively reducing the dimensionality of the state-action space and greatly shortening optimization time. The exploration strategy determines the usefulness of the experience and hence the optimization efficiency, especially for complex multi-reservoir problems; an improved ε-greedy strategy is proposed and compared with conventional ε-greedy, upper confidence bound (UCB), and Boltzmann exploration, verifying its effectiveness. On this basis, n-step returns upgrade the method to n-step Q-learning, and suitable values of n, the learning rate, and other hyperparameters further improve optimization efficiency.
Keywords: optimal reservoir operation; reinforcement learning; Q-learning; penalty function; feasible direction method
13. Daily stochastic optimal scheduling of a combined wind power and pumped-storage system based on n-step Q-learning (cited: 7)
Authors: LI Wenwu, MA Haoyun, HE Zhonghao, XU Kang. 《水电能源科学》, 2022, Issue 1, pp. 206-210.
To address the large power deviation and slow convergence of the Q-learning algorithm in daily stochastic optimal scheduling of a combined wind power and pumped-storage system, a scheduling method based on n-step Q-learning is proposed. First, the stochastic wind power output process is modeled as a Markov process and a daily stochastic optimal scheduling model is built. Next, the advantages of applying n-step Q-learning to the scheduling model are analyzed. Finally, the model is solved following the application workflow. A case study shows that the results of n-step Q-learning depend on the values of n and the learning rate, and that moderate values of both parameters yield the best power deviation. Compared with Q-learning on this problem, n-step Q-learning converges faster, reduces power deviation by 7.4%, and cuts solution time by 10.4%, verifying its superiority.
Keywords: stochastic wind-storage optimal scheduling; reinforcement learning; Q-learning algorithm; n-step bootstrapping
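Both n-step Q-learning entries above rest on the n-step return, which accumulates n observed rewards before bootstrapping from the Q-table. A minimal sketch with assumed rewards and tail value:

```python
# n-step return:
#   G = r_t + gamma*r_{t+1} + ... + gamma^(n-1)*r_{t+n-1} + gamma^n * max_a Q(s_{t+n}, a)
# The rewards and the bootstrapped tail value below are illustrative assumptions.

def n_step_return(rewards, q_tail, gamma=0.9):
    """rewards: the n observed rewards; q_tail: max_a Q at the n-th next state."""
    g = 0.0
    for k, r in enumerate(rewards):
        g += (gamma ** k) * r
    return g + (gamma ** len(rewards)) * q_tail

g = n_step_return([1.0, 1.0, 1.0], q_tail=2.0)
```

With n = 1 this reduces to the ordinary one-step Q-learning target, which is why the two algorithms share the same update machinery.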
14. Rerouting path planning in congested airspace based on the Q-Learning algorithm (cited: 2)
Authors: XIANG Zheng, HE Yuyang, QUAN Zhiwei. 《科学技术与工程》, 2022, Issue 32, pp. 14494-14501.
Surging air traffic is putting increasing strain on airspace resources. To relieve this, aircraft rerouting is studied at the flow-management level. First, the airspace environment is discretized on a grid and divided into three types of cells according to the traffic congestion at waypoints. Next, the reward function of the Markov decision process in reinforcement learning is modified to model the problem, which is solved iteratively with the Q-Learning algorithm under an ε-greedy strategy, with different parameter values explored and compared to improve the applicability of the results. Finally, simulation runs produce the optimal paths and corresponding performance indices under different parameter settings. The results show that the model and algorithm can find suitable rerouting paths through congested airspace within a given period, steering aircraft around congested waypoints, shortening airborne delays, and effectively relieving airspace congestion.
Keywords: rerouting path planning; traffic congestion; reinforcement learning; Markov decision process; Q-learning algorithm; ε-greedy strategy
15. Q-learning reinforcement learning guidance law (cited: 30)
Authors: ZHANG Qinhao, AO Baiqiang, ZHANG Qinxue. 《系统工程与电子技术》, 2020, Issue 2, pp. 414-419.
On future battlefields, intelligent missiles will become a precise and effective strike weapon, and missile intelligence has become a major development trend. Building on the classical proportional navigation law, this paper proposes a variable-gain guidance algorithm based on reinforcement learning. The algorithm takes the line-of-sight rotation rate as the state, designs the reward function from the miss distance, and designs a discretized action space from which the missile selects the correct guidance command. Simulations verify that the proposed algorithm achieves better guidance accuracy than conventional proportional navigation and gives the missile autonomous decision-making capability.
Keywords: proportional navigation; guidance law; miss distance; maneuvering target; reinforcement learning; Q-learning; temporal-difference algorithm
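The baseline the guidance law builds on is proportional navigation, a_cmd = N · V_c · λ̇, with reinforcement learning choosing the gain N from a discrete set. A crude sketch; the state discretization, gain set, and Q-table values are illustrative assumptions, not the paper's design:

```python
# Proportional navigation with an RL-selected gain: the Q-table (learned
# offline in the paper's setting) maps a discretized line-of-sight-rate state
# to the value of each candidate gain. All numbers here are assumptions.

def pn_acceleration(N, closing_speed, los_rate):
    """Classic PN command: gain * closing speed * line-of-sight rate."""
    return N * closing_speed * los_rate

def rl_gain(los_rate, q_table, gains=(3.0, 4.0, 5.0)):
    """Pick the gain with the highest learned value for the discretized state."""
    state = 0 if abs(los_rate) < 0.01 else 1     # crude two-bin discretization
    best = max(range(len(gains)), key=lambda i: q_table[state][i])
    return gains[best]

q_table = [[0.1, 0.9, 0.2],    # learned values for a small LOS rate
           [0.0, 0.1, 0.8]]    # learned values for a large LOS rate
a_cmd = pn_acceleration(rl_gain(0.05, q_table), closing_speed=300.0, los_rate=0.05)
```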
16. Research on trajectory optimization based on Q-learning (cited: 2)
Authors: ZHOU Yixin, CHENG Ketao, LIU Limin, HE Xianjun, HUANG Zhengui. 《兵器装备工程学报》, 2022, Issue 5, pp. 191-196.
To improve the efficiency of trajectory optimization and shorten combat response time, a simply-controlled trajectory optimization method based on the Q-learning algorithm is proposed. First, in the vertical plane, a three-degree-of-freedom (3-DOF) point-mass projectile subject only to gravity and air drag is taken as the object of study; the uncontrolled trajectory equations are established as the reference model and solved with the Runge-Kutta method. On this basis, controlled trajectory optimization models are built with maximum flight range and maximum impact velocity as the respective objectives, with acceleration commands as the direct control output. Given the initial velocity and launch angle, the Q-learning algorithm outputs control commands during the exterior-ballistic flight, and iterative reinforcement learning computation achieves the trajectory optimization objectives. Simulation results show that the missile range under reinforcement-learning control clearly exceeds the uncontrolled case, indicating that the proposed design method optimizes trajectories effectively and efficiently.
Keywords: trajectory optimization; reinforcement learning; Q-learning algorithm; exterior ballistics
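The reference model above is a point-mass projectile under gravity and air drag, integrated with a Runge-Kutta method. A minimal planar RK4 sketch; the drag coefficient and launch state are illustrative assumptions, not the paper's data:

```python
# Planar point-mass projectile with quadratic drag, integrated by classic RK4.
G = 9.81   # gravity, m/s^2
K = 0.001  # lumped drag coefficient, 1/m (assumed)

def deriv(state):
    """state = (x, y, vx, vy); drag opposes the velocity vector."""
    x, y, vx, vy = state
    v = (vx * vx + vy * vy) ** 0.5
    return (vx, vy, -K * v * vx, -G - K * v * vy)

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + dt * 0.5 * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# 45-degree launch at ~100 m/s; drag makes the range fall short of vacuum.
state = (0.0, 0.0, 70.7, 70.7)
for _ in range(2000):
    state = rk4_step(state, 0.01)
    if state[1] < 0.0:
        break
```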
17. SVDPP recommendation algorithm optimized by Q-learning (cited: 4)
Authors: ZHOU Yunteng, ZHANG Xueying, LI Fenglian, LIU Shuchang, JIAO Jiangli, TIAN Dou. 《计算机工程》, 2021, Issue 2, pp. 46-51.
To further improve the performance of personalized recommender systems, a new collaborative filtering recommendation algorithm is proposed by optimizing the SVDPP algorithm with reinforcement learning. Considering the time effect of user ratings, the recommendation problem is cast as a Markov decision process. On this basis, the Q-learning algorithm is used to build a user-rating optimization model that fuses timestamp information, while missing values are predicted by rounding-and-filling predicted ratings and by optimized boundary completion, mitigating the data sparsity problem. Experimental results show that the algorithm lowers root mean square error by 0.0056 compared with the SVDPP algorithm, indicating that fusing timestamps and optimizing recommendation performance with reinforcement learning is feasible.
Keywords: collaborative filtering; singular value decomposition; reinforcement learning; Markov decision process; Q-learning algorithm
18. Automated intrusion response decision-making method based on Q-Learning (cited: 4)
Authors: LIU Jing, ZHANG Yuchen, ZHANG Hongqi. 《信息网络安全》, 2021, Issue 6, pp. 26-35.
To address the poor adaptivity of existing automated intrusion response decision-making, this paper proposes a Q-Learning-based method, Q-AIRD. Q-AIRD formalizes the states and actions of network attack and defense on an attack graph, and introduces an attack-pattern layer to recognize attackers of different capability levels and thus make targeted response actions. Given the characteristics of intrusion response, it selects response policies with a Softmax algorithm, introducing a safety threshold θ, a stable reward factor μ, and a penalty factor ν. A voting mechanism evaluates policies against multiple response objectives, meeting multi-objective response needs; on this basis, a Q-Learning-based automated intrusion response decision algorithm is designed. Simulation experiments show that Q-AIRD is highly adaptive and achieves timely, effective intrusion response decisions.
Keywords: reinforcement learning; automated intrusion response; Softmax algorithm; multi-objective decision-making
19. Research on a flexible feeding system based on an improved Q-learning algorithm (cited: 1)
Authors: DING Huiqin, CAO Chuqing, XU Changjun, LI Long. 《现代制造工程》, 2023, Issue 4, pp. 87-92, 129.
The performance of existing vibratory flexible feeding systems mostly relies on manual tuning before the equipment leaves the factory. Manual tuning of vibration parameters is time-consuming and labor-intensive, and the results are highly case-specific, leaving the system short of flexibility. A flexible feeding system based on an improved Q-learning algorithm is proposed: the reward function and ε-greedy strategy of conventional Q-learning are improved to fit the feeding system's own characteristics, letting the system learn by itself a set of vibration parameters that deliver good part-feeding performance. Experiments show that the proposed algorithm reduces manual involvement in tuning the flexible feeding system, demonstrating its feasibility and effectiveness; compared with conventional Q-learning, the improved algorithm better suits the practical application of flexible feeding systems.
Keywords: reinforcement learning; Q-learning algorithm; flexible feeding system
20. Deep reinforcement learning for the dynamic flexible job-shop scheduling problem (cited: 1)
Authors: YANG Dan, SHU Xiantao, YU Zhen, LU Guangtao, JI Songlin, WANG Jiabing. 《现代制造工程》, 2025, Issue 2, pp. 10-16.
With the continuing development of intelligent manufacturing technologies such as smart workshops, research on artificial intelligence algorithms for shop scheduling has attracted wide attention; dynamic events during shop operation are an important disturbance affecting scheduling performance. This paper therefore proposes a deep reinforcement learning method for the dynamic flexible job-shop scheduling problem with randomly arriving jobs. First, a mathematical model of the dynamic flexible job shop is built with minimum total tardiness as the objective. Then, eight shop state features are extracted and six composite dispatching rules are constructed; an ε-greedy action selection strategy is adopted and the reward function is designed. Finally, the advanced D3QN algorithm solves the problem, and its effectiveness is validated on shop instances of different sizes. The results show that the proposed D3QN algorithm solves the dynamic flexible job-shop scheduling problem with randomly arriving jobs very effectively, achieving the best solution in 58.3% of the shop instances and reducing shop tardiness by 11.0% and 15.4% compared with the conventional DQN and DDQN algorithms respectively, further improving production efficiency.
Keywords: deep reinforcement learning; D3QN algorithm; random job arrivals; flexible job-shop scheduling; dynamic scheduling