Fund: Supported by the National Defense Science and Technology Innovation (18-163-15-LZ-001-004-13).
Abstract: This paper investigates a guidance method based on reinforcement learning (RL) for coplanar orbital interception in a continuous low-thrust scenario. The problem is formulated as a Markov decision process (MDP) model, and a well-designed RL algorithm, experience-based deep deterministic policy gradient (EBDDPG), is proposed to solve it. By taking advantage of prior information generated through the optimal control model, the proposed algorithm not only resolves the convergence problem of common RL algorithms but also successfully trains an efficient deep neural network (DNN) controller for the chaser spacecraft to generate the control sequence. Numerical simulation results show that the proposed algorithm is feasible and that the trained DNN controller improves efficiency over traditional optimization methods by roughly two orders of magnitude.
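The "experience-based" idea in the abstract is easy to illustrate: trajectories produced offline by the optimal control model are converted into transitions and used to seed the DDPG replay buffer before training, so the agent starts from informative data rather than random exploration. The sketch below is a minimal illustration of that seeding step under assumed dimensions and a stubbed solver; it is not the authors' implementation, and every function, state dimension, and reward term here is an illustrative assumption.

```python
import random
from collections import deque

import numpy as np


class ReplayBuffer:
    """Standard DDPG replay buffer storing (s, a, r, s', done) transitions."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = map(np.array, zip(*batch))
        return states, actions, rewards, next_states, dones


def solve_optimal_control(initial_state, horizon):
    """Placeholder for an optimal-control solver returning a
    (state, action) trajectory; the real solver is problem-specific."""
    states, actions = [initial_state], []
    for _ in range(horizon):
        action = np.zeros(2)        # thrust command stub
        next_state = states[-1]     # dynamics stub
        actions.append(action)
        states.append(next_state)
    return states, actions


def seed_buffer_with_prior(buffer, n_trajectories=50, horizon=200):
    """Convert solver trajectories into RL transitions and store them
    before any online DDPG interaction begins."""
    for _ in range(n_trajectories):
        s0 = np.random.randn(4)     # illustrative 4-D coplanar state
        states, actions = solve_optimal_control(s0, horizon)
        for t in range(horizon):
            reward = -np.linalg.norm(actions[t])   # e.g., fuel-use penalty stub
            done = t == horizon - 1
            buffer.push(states[t], actions[t], reward, states[t + 1], done)


buffer = ReplayBuffer()
seed_buffer_with_prior(buffer)
# DDPG training would then draw batches mixing prior and online experience:
states, actions, rewards, next_states, dones = buffer.sample(64)
```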
Fund: Supported by the National Natural Science Foundation of China (60474035), the National Research Foundation for the Doctoral Program of Higher Education of China (20050359004), and the Natural Science Foundation of Anhui Province (070412035).
Abstract: To address the difficulty of processing real-time sensing data from manufacturing resources in a cloud manufacturing environment based on cloud-edge collaboration, and accounting for uncertainties such as the limited computing resources at the edge, dynamically changing network conditions, and fluctuating task loads, this paper presents a cloud-edge collaborative joint offloading strategy based on mixed deep reinforcement learning (M-DRL). First, a joint offloading model is built by combining discrete model offloading on the cloud side with continuous task offloading on the edge side. Second, the offloading optimization problem, whose objective is the total cost of delay and energy consumption over a sequence of consecutive time slots, is formally defined as a Markov decision process (MDP). Finally, the optimization problem is solved by the M-DRL algorithm, which uses an integrated exploration strategy combining DDPG and DQN and introduces a long short-term memory (LSTM) network into the network architecture. Simulation results show that, compared with several existing offloading algorithms, M-DRL exhibits good convergence and stability and significantly reduces the total system cost, providing an effective solution for the timely processing of manufacturing resource sensing data.
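The mixed action structure described above, a discrete cloud-side model-offloading choice handled DQN-style alongside a continuous edge-side offloading ratio handled DDPG-style, with an LSTM summarizing the recent time-slot history, can be sketched as one network with two heads. The PyTorch sketch below is a hedged illustration, not the paper's architecture; all layer sizes, dimensions, and names are assumptions made for the example.

```python
import torch
import torch.nn as nn


class MixedOffloadingPolicy(nn.Module):
    """One LSTM encoder over the time-slot history feeding two heads:
    a DQN-style Q head for the discrete cloud model-offloading choice
    and a DDPG-style actor head for the continuous edge offloading ratio."""

    def __init__(self, obs_dim=8, hidden=64, n_discrete=4):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_discrete)   # Q-value per cloud option
        self.actor_head = nn.Sequential(              # edge offloading ratio in [0, 1]
            nn.Linear(hidden, 1), nn.Sigmoid()
        )

    def forward(self, obs_seq):
        # obs_seq: (batch, time_slots, obs_dim) history of network state / task load
        _, (h, _) = self.lstm(obs_seq)
        h = h[-1]                                     # final hidden state of last layer
        return self.q_head(h), self.actor_head(h)


policy = MixedOffloadingPolicy()
obs_seq = torch.randn(2, 10, 8)                       # 2 samples, 10 time slots
q_values, edge_ratio = policy(obs_seq)
discrete_action = q_values.argmax(dim=-1)             # greedy cloud-side choice
print(discrete_action.shape, edge_ratio.shape)        # torch.Size([2]) torch.Size([2, 1])
```

Sharing one LSTM encoder between both heads keeps the discrete and continuous decisions conditioned on the same temporal summary, which matches the joint-offloading framing; training each head with its respective DQN and DDPG losses is the part the abstract leaves unspecified.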