The guidance strategy is a critical factor in determining the strike effectiveness of a missile operation. A novel guidance law is presented that exploits deep reinforcement learning (DRL) with a hierarchical deep deterministic policy gradient (DDPG) algorithm. The reward functions are constructed to minimize the line-of-sight (LOS) angle rate and to avoid the threats posed by opposing obstacles. To attenuate chattering of the acceleration, a hierarchical reinforcement learning structure and an improved reward function with an action penalty are put forward. Simulation results validate that a missile guided by the proposed method can hit the target successfully while effectively avoiding the threatened areas.
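The abstract does not give the reward function explicitly; the following is a minimal Python sketch of a shaped reward of the kind it describes, combining an LOS-rate term, a threat-area penalty, and an action penalty on consecutive acceleration commands. The weights w_los, w_threat, w_action, the threat_radius, and the functional form are illustrative assumptions, not values from the paper.

```python
def guidance_reward(los_rate, accel_cmd, prev_accel_cmd, dist_to_threat,
                    threat_radius=1.0, w_los=1.0, w_threat=5.0, w_action=0.1):
    """Illustrative shaped reward for a DRL guidance law (all weights assumed)."""
    # Reward shrinking the line-of-sight angle rate.
    r_los = -w_los * abs(los_rate)
    # Penalty when the missile enters a threatened (obstacle) area.
    r_threat = -w_threat if dist_to_threat < threat_radius else 0.0
    # Action penalty on the change between consecutive acceleration
    # commands, intended to suppress chattering.
    r_action = -w_action * abs(accel_cmd - prev_accel_cmd)
    return r_los + r_threat + r_action
```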
In radar cooperative anti-jamming decision making, the rewards are sparse, which makes it difficult for reinforcement learning algorithms to converge and hampers cooperative training. To solve this problem, a hierarchical multi-agent deep deterministic policy gradient (H-MADDPG) algorithm is proposed. It improves the convergence of the training process by accumulating the sparse rewards, and it introduces the idea of the Harvard architecture to store the training experiences of the multiple agents separately, eliminating confusion in experience replay. In networked simulations with two and four radars, under a strong jamming condition, the radar detection success rate is improved by 15% and 30%, respectively, compared with the multi-agent deep deterministic policy gradient (MADDPG) algorithm.
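As an illustration of the two mechanisms just described, the sketch below assumes one replay buffer per radar agent (mirroring the Harvard-architecture idea of keeping each agent's experience separate) and a plain discounted accumulation of sparse per-step rewards. Buffer capacity, the sampling scheme, and the accumulation rule are assumptions for illustration, not details from the paper.

```python
import random
from collections import deque


class PerAgentReplay:
    """Separate experience buffers, one per radar agent (illustrative sketch)."""

    def __init__(self, n_agents, capacity=100_000):
        self.buffers = [deque(maxlen=capacity) for _ in range(n_agents)]

    def store(self, agent_id, transition):
        # transition = (state, action, accumulated_reward, next_state, done)
        self.buffers[agent_id].append(transition)

    def sample(self, agent_id, batch_size):
        # Sample only from this agent's own buffer so experience from
        # different agents is never mixed during replay.
        buf = list(self.buffers[agent_id])
        return random.sample(buf, min(batch_size, len(buf)))


def accumulate_sparse_reward(sparse_rewards, gamma=1.0):
    """Accumulate sparse per-step rewards over an episode segment so the
    learner sees a denser signal (accumulation rule is an assumption)."""
    total, discount = 0.0, 1.0
    for r in sparse_rewards:
        total += discount * r
        discount *= gamma
    return total
```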
Funding: supported by the National Natural Science Foundation of China (62003021, 91212304).