Funding: supported by the National Natural Science Foundation of China (No. 61573286), the Aeronautical Science Foundation of China (No. 20180753006), the Fundamental Research Funds for the Central Universities (No. 3102019ZDHKY07), the Natural Science Foundation of Shaanxi Province (Nos. 2019JM-163 and 2020JQ-218), and the Shaanxi Province Key Laboratory of Flight Control and Simulation Technology.
Abstract: To enable unmanned combat aerial vehicles (UCAVs) to make autonomous aerial combat decisions rapidly and accurately in an uncertain environment, this paper proposes a decision-making method based on an improved deep reinforcement learning (DRL) algorithm: the multistep double deep Q-network (MS-DDQN) algorithm. First, a six-degree-of-freedom UCAV model based on an aircraft control system is established on a simulation platform, and situation assessment functions for the UCAV and its target are constructed from their angles, altitudes, environments, missile attack performance, and UCAV performance. By controlling the flight path angle, roll angle, and flight velocity, 27 common basic actions are designed. On this basis, to overcome the slow training and convergence of traditional DRL, the improved MS-DDQN method incorporates the final return value into the preceding steps. Finally, a pre-trained learning model is used as the starting point for a second learning model that simulates the UCAV aerial combat decision-making process under the basic training method, which shortens the training time and improves learning efficiency. The improved DRL algorithm significantly accelerates training, estimates the target value more accurately, and can be applied to aerial combat decision-making.
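A minimal PyTorch-style sketch of the multistep double-DQN target this abstract describes, under the assumption that the replay buffer already stores n-step transitions with the accumulated discounted return; `online_net`, `target_net`, and the tensor shapes are illustrative, not the paper's actual implementation:

```python
import torch

# Sketch of the MS-DDQN bootstrapped target, assuming an n-step transition
# (s_t, a_t, R_n, s_{t+n}, done) where R_n = sum_{k=0}^{n-1} gamma^k * r_{t+k}
# was accumulated when the transition was stored. online_net and target_net
# are assumed to be torch.nn.Module Q-networks mapping states to [B, A] values.

def ms_ddqn_target(online_net, target_net, n_step_return, next_state, done,
                   gamma: float, n_steps: int) -> torch.Tensor:
    """Target: R_n + gamma^n * Q_target(s', argmax_a Q_online(s', a))."""
    with torch.no_grad():
        # Double-DQN decoupling: the online network selects the action ...
        best_action = online_net(next_state).argmax(dim=1, keepdim=True)
        # ... and the target network evaluates it.
        next_q = target_net(next_state).gather(1, best_action).squeeze(1)
        return n_step_return + (gamma ** n_steps) * next_q * (1.0 - done)
```

Propagating the multistep return this way is what lets the final reward reach earlier steps faster than the one-step DDQN target.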
Funding: supported by the Aeronautical Science Foundation (No. 2017ZC53033).
Abstract: Unmanned aerial vehicle (UAV) swarm technology has been a research hotspot in recent years, and as the autonomous intelligence of UAVs continues to improve, swarming will become one of the main trends in UAV development. This paper studies the behavior decision-making process of a UAV swarm rendezvous task based on the double deep Q-network (DDQN) algorithm. A guided reward function is designed to solve the convergence problem caused by sparse returns in deep reinforcement learning (DRL) for long-duration tasks. The concept of a temporary storage area is also proposed, which optimizes the experience replay unit of the traditional DDQN algorithm, improves its convergence speed, and accelerates training. Unlike traditional task environments, this paper establishes a continuous state-space task environment model to improve the verification of the UAV task environment. Based on the DDQN algorithm, the collaborative tasks of the UAV swarm are trained in different task scenarios. The experimental results validate that the DDQN algorithm efficiently trains the UAV swarm to complete the given collaborative tasks while meeting the swarm's requirements for centralization and autonomy and improving the intelligence of collaborative task execution. The simulation results show that, after training, the proposed UAV swarm carries out the rendezvous task well, with a mission success rate of 90%.
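One plausible reading of the "temporary storage area" idea is a staging buffer that holds the current episode's transitions and commits them to the main replay memory only once the episode outcome is known, so the sparse terminal return can be spread backward as a guided reward. The sketch below assumes exactly that; the decayed-bonus propagation rule and all names are illustrative assumptions, not the paper's definitions:

```python
from collections import deque
import random

# Sketch: transitions from the current episode are staged separately; on
# episode end, the terminal return is propagated backward over the staged
# steps (with an assumed geometric decay) before they enter replay memory.

class StagedReplayBuffer:
    def __init__(self, capacity: int = 100_000, decay: float = 0.9):
        self.memory = deque(maxlen=capacity)  # main replay memory
        self.staging = []                     # temporary storage area
        self.decay = decay

    def stage(self, state, action, reward, next_state, done):
        self.staging.append([state, action, reward, next_state, done])

    def commit(self, terminal_bonus: float):
        # Spread the episode's terminal return backward over staged steps.
        bonus = terminal_bonus
        for transition in reversed(self.staging):
            transition[2] += bonus
            bonus *= self.decay
        self.memory.extend(tuple(t) for t in self.staging)
        self.staging.clear()

    def sample(self, batch_size: int):
        return random.sample(list(self.memory), batch_size)
```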
Abstract: To address the difficulty of predicting enemy maneuver strategies and the low win rate caused by the complex, highly adversarial information environment of unmanned aerial vehicle (UAV) air combat, a guided Minimax-DDQN (Minimax Double Deep Q-Network) algorithm is designed. First, a guided policy-exploration mechanism is proposed on the basis of the Minimax decision method. Then, combined with the guided Minimax policy, a DDQN (Double Deep Q-Network) algorithm is designed with the goal of improving the update efficiency of the Q-network. Finally, a progressive three-stage network training method is proposed, in which adversarial training between different decision models yields a more optimized decision model. Experimental results show that, compared with algorithms such as Minimax-DQN and Minimax-DDQN, the proposed algorithm improves the success rate of pursuing a straight-line target by 14% to 60%, and its win rate against the DDQN algorithm is no less than 60%. Thus, compared with DDQN, Minimax-DDQN, and similar algorithms, the proposed algorithm has stronger decision-making capability and better adaptability in highly adversarial combat environments.
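For context, the Minimax decision step underlying this family of algorithms can be sketched as follows: the Q-network scores joint (own action, opponent action) pairs, and the agent picks the action that maximizes its worst-case value. The output shape and `q_net` are assumptions for illustration, not the paper's network:

```python
import torch

# Sketch of a Minimax action selection over a joint-action Q-network,
# assumed to output values of shape [batch, n_own_actions, n_opp_actions].

def minimax_action(q_net, state: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        q_joint = q_net(state)                  # [B, A_own, A_opp]
        worst_case = q_joint.min(dim=2).values  # opponent assumed to minimize
        return worst_case.argmax(dim=1)         # agent maximizes worst case
```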
Abstract: To address the overestimation and poor convergence of the deep Q-network (DQN) algorithm in autonomous ship collision-avoidance decision-making at sea, a clipped double DQN (double DQN, DDQN) algorithm incorporating noisy networks is proposed, denoted NoisyNet-CDDQN. The algorithm reduces the overestimation of DQN by clipping the double Q-values, and introduces noisy networks to enhance stability and remedy DQN's poor convergence. The ship motion mathematical model and the ship-domain model are fully considered, and the reward function design accounts for factors such as course deviation and the International Regulations for Preventing Collisions at Sea (COLREGs). Simulation experiments in multiple encounter scenarios show that the proposed NoisyNet-CDDQN algorithm converges 27.27% faster than a DQN algorithm with noisy networks, 54.55% faster than DDQN, and 87.27% faster than DQN, and that the resulting autonomous collision-avoidance behavior complies with COLREGs, providing a reference for autonomous ship collision avoidance.
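A minimal sketch of the clipped double-Q target this abstract refers to, assuming twin target networks in the style of clipped double Q-learning: two estimates are computed for the same next action and the smaller is used, suppressing overestimation. The twin-network setup and names are assumptions; the paper's exact clipping scheme may differ:

```python
import torch

# Sketch of a clipped double-Q target: r + gamma * min(Q1', Q2')(s', a*),
# where a* is chosen by the online network. q1_target / q2_target are
# assumed twin target networks producing [B, A] value tensors.

def clipped_double_q_target(q1_target, q2_target, online_net,
                            reward, next_state, done, gamma: float):
    with torch.no_grad():
        a_star = online_net(next_state).argmax(dim=1, keepdim=True)
        q1 = q1_target(next_state).gather(1, a_star).squeeze(1)
        q2 = q2_target(next_state).gather(1, a_star).squeeze(1)
        # Taking the elementwise minimum biases the target downward,
        # counteracting the maximization bias of vanilla DQN.
        return reward + gamma * torch.min(q1, q2) * (1.0 - done)
```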
Abstract: To address the slow convergence of robots exploring complex unknown environments under the traditional deep Q-network model, an improved dueling double deep Q-network method (Improved Dueling Deep Double Q-Network, IDDDQN) based on the dueling network architecture is proposed. The mobile robot estimates the value functions of its three actions through the improved DDQN network structure, updates the network parameters, and obtains the corresponding Q-values by training the network. The robot adopts an exploration strategy combining the Boltzmann distribution with ε-greedy to select an optimal action and reach the next observation. The data collected through learning are stored in the cache memory unit using an improved resampling-and-selection mechanism, and mini-batches of data are used to train the network. Experimental results show that, compared with the basic DDQN algorithm, a robot trained with IDDDQN adapts to unknown environments faster, the network converges more quickly, the success rate of reaching the target point more than triples, and a better optimal path can be obtained in complex unknown environments.
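One way to combine Boltzmann exploration with ε-greedy, as the abstract describes, is to replace the uniform random branch of ε-greedy with a softmax draw over the Q-values. This is an illustrative combination rule; the paper's exact scheme is not specified here:

```python
import numpy as np

# Sketch of a combined Boltzmann / epsilon-greedy policy: with probability
# epsilon, sample from a softmax over Q-values (temperature-controlled),
# so exploratory actions still favor higher-valued options; otherwise greedy.

def select_action(q_values: np.ndarray, epsilon: float, temperature: float) -> int:
    if np.random.rand() < epsilon:
        logits = q_values / temperature
        logits -= logits.max()  # subtract max for numerical stability
        probs = np.exp(logits) / np.exp(logits).sum()
        return int(np.random.choice(len(q_values), p=probs))
    return int(q_values.argmax())
```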
Abstract: The stock market is characterized by rapid change, many interfering factors, and insufficient cyclical data; stock trading is a game under incomplete information, and single-objective supervised learning models struggle with such sequential decision problems. Reinforcement learning is one effective approach to this class of problems. This paper proposes ISTG (Intelligent Stock Trader and Gym), an intelligent stock-trading model based on deep reinforcement learning. It integrates multiple data types such as historical market data, technical indicators, and macroeconomic indicators; analyzes evaluation criteria and proven control strategies; processes long-period data; implements a replay model that can be incrementally extended with different data types; automatically computes reward labels; and trains an intelligent trader. It also proposes a method to compute single-step deterministic action values directly from market data. In comparative experiments on more than 1,400 Chinese stocks with over ten years of data, ISTG achieves an overall return of 13%, outperforming the overall −7% of buy-and-hold.
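As a rough illustration of what "automatically computing reward labels" from market data could look like, the sketch below derives a single-step deterministic reward from the next-day log return, signed by the chosen position. The action set and labeling rule are entirely assumed; the abstract does not specify ISTG's actual formulas:

```python
import numpy as np

# Sketch: a deterministic per-step reward label computed directly from a
# price series. Actions and the labeling rule are illustrative assumptions.

ACTIONS = {0: -1.0, 1: 0.0, 2: 1.0}  # short / hold / long (hypothetical)

def step_reward(close_prices: np.ndarray, t: int, action: int) -> float:
    """Reward for taking `action` on day t, realized via day t+1's close."""
    log_return = np.log(close_prices[t + 1] / close_prices[t])
    return ACTIONS[action] * log_return
```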