Equipment development planning (EDP) is usually a long-term process performed in an environment with high uncertainty. Traditional multi-stage dynamic programming cannot cope with this kind of uncertainty and its unpredictable situations. To deal with this problem, a multi-stage EDP model based on a deep reinforcement learning (DRL) algorithm is proposed to respond quickly to any environmental change within a reasonable range. First, the basic problem of multi-stage EDP is described and a mathematical planning model is constructed. Then, for two kinds of uncertainty (future capability requirements and the amount of investment at each stage), a corresponding DRL framework is designed that defines the environment, state, action, and reward function for multi-stage EDP. After that, the dueling deep Q-network (Dueling DQN) algorithm is used to solve the multi-stage EDP and generate an approximately optimal multi-stage equipment development scheme. Finally, a case of ten kinds of equipment in 100 randomly generated environments is used to test the feasibility and effectiveness of the proposed models. The results show that the algorithm responds instantaneously in any state of the multi-stage EDP environment; unlike traditional algorithms, it does not need to re-optimize the problem for every change in the environment. In addition, the algorithm can flexibly adjust at subsequent planning stages when the equipment capability requirements change, adapting to the new requirements.
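The distinguishing feature of the Dueling DQN used here is that it splits the Q-function into a state-value stream V(s) and an advantage stream A(s, a), then recombines them with a mean-subtracted aggregation. The paper's actual network and EDP state encoding are not given, so the sketch below only illustrates that aggregation and an epsilon-greedy action choice over candidate development actions; all names and numbers are illustrative assumptions.

```python
import random

def dueling_q_values(value, advantages):
    """Combine the two streams of a dueling Q-network:
    Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Subtracting the mean advantage keeps V and A identifiable."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """Explore with probability epsilon, otherwise pick the greedy action."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda i: q_values[i])

# Toy example: three candidate development actions in one planning state.
q = dueling_q_values(value=2.0, advantages=[0.5, -0.5, 0.0])
# q == [2.5, 1.5, 2.0]; with epsilon=0 the greedy choice is action 0.
```

Because the advantages are mean-centered, adding a constant to every advantage leaves the resulting Q-values unchanged, which is what makes the decomposition well-posed during training.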
To solve the problem of reasonably allocating energy between the auxiliary power unit (APU) and the traction battery of an extended-range electric light truck, a control-oriented simulation model is built in Simulink and a real-time energy management strategy (EMS) based on the twin delayed deep deterministic policy gradient (TD3) algorithm is proposed. With engine fuel consumption and the variation of battery state of charge (SOC) as the optimization objectives, the deep reinforcement learning agent is trained on the world light vehicle test procedure (WLTP) cycle. Simulation results under different driving cycles show that the TD3-based EMS has good stability and adaptability; the TD3 algorithm controls engine speed and torque continuously, producing a smoother output power. Compared with strategies based on the conventional deep Q-network (DQN) algorithm and the deep deterministic policy gradient (DDPG) algorithm, the TD3-based EMS improves fuel economy by 12.35% and 0.67%, respectively, reaching 94.85% of that of a dynamic programming (DP) based strategy, and converges 40.00% and 47.60% faster, respectively.
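TD3 improves on DDPG through three tricks: clipped double-Q targets, target-policy smoothing, and delayed actor updates. The abstract does not give the paper's implementation, so the sketch below only shows the first two in minimal stdlib Python; the function names, noise parameters, and action bounds are illustrative assumptions, not the paper's values.

```python
import random

def smoothed_target_action(mu, noise_std=0.2, noise_clip=0.5,
                           low=-1.0, high=1.0, rng=random):
    """Target-policy smoothing: perturb the target actor's action with
    clipped Gaussian noise, then clip the result to the action bounds
    (here a normalized engine torque/speed command)."""
    noise = max(-noise_clip, min(noise_clip, rng.gauss(0.0, noise_std)))
    return max(low, min(high, mu + noise))

def td3_target(reward, gamma, q1_next, q2_next, done=False):
    """Clipped double-Q target: bootstrap from the smaller of the two
    target critics to curb the overestimation bias DDPG suffers from."""
    return reward + gamma * (0.0 if done else 1.0) * min(q1_next, q2_next)

# Toy example: critic target for one transition with reward 1.0, gamma 0.9.
y = td3_target(1.0, 0.9, q1_next=2.0, q2_next=3.0)  # 1.0 + 0.9 * 2.0 = 2.8
```

The min over the two critics is what the "twin" in TD3 refers to; the "delayed" part (updating the actor less frequently than the critics) is a training-loop schedule rather than a formula, so it is omitted here.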
Funding: supported by the National Natural Science Foundation of China (71690233, 72001209) and the Scientific Research Foundation of the National University of Defense Technology (ZK19-16).