Fund: Support received from the National Natural Science Foundation of China (Grant Nos. 52372398 & 62003272).
Abstract: Dynamic soaring, inspired by the wind-riding flight of birds such as albatrosses, is a biomimetic technique that leverages wind fields to enhance the endurance of unmanned aerial vehicles (UAVs). Achieving a precise soaring trajectory is crucial for maximizing energy efficiency during flight. Existing nonlinear programming methods depend heavily on the choice of initial values, which are hard to determine. Therefore, this paper introduces a deep reinforcement learning method based on a differentially flat model for dynamic soaring trajectory planning and optimization. First, the gliding trajectory is parameterized using Fourier basis functions, achieving a flexible trajectory representation with a minimal number of hyperparameters. Next, the trajectory optimization problem is formulated as the dynamic interactive process of a Markov decision process. The hyperparameters of the trajectory are optimized using the Proximal Policy Optimization (PPO2) algorithm from deep reinforcement learning (DRL), reducing the strong reliance on initial value settings in the optimization process. Finally, a comparison between the proposed method and the nonlinear programming method shows that the trajectory generated by the proposed approach is smoother while meeting the same performance requirements. Specifically, the proposed method achieves a 34% reduction in maximum thrust, a 39.4% decrease in maximum thrust difference, and a 33% reduction in maximum airspeed difference.
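As a concrete illustration of the Fourier parameterization above, the sketch below evaluates one flat-output coordinate and its time derivative from a truncated Fourier series; with a differentially flat model, states and inputs can then be recovered algebraically from such outputs. This is a minimal sketch assuming a single coordinate and an illustrative coefficient layout (a0, a1, b1, a2, b2, ...), not the paper's implementation:

    import numpy as np

    def fourier_position(coeffs, period, t):
        # x(t) = a0 + sum_k [a_k cos(k*w*t) + b_k sin(k*w*t)], w = 2*pi/period
        coeffs = np.asarray(coeffs, dtype=float)
        a0, a, b = coeffs[0], coeffs[1::2], coeffs[2::2]
        w = 2.0 * np.pi / period
        k = np.arange(1, a.size + 1)
        return a0 + np.sum(a * np.cos(k * w * t) + b * np.sin(k * w * t))

    def fourier_velocity(coeffs, period, t):
        # Analytic derivative of the same series; higher derivatives follow
        # the same pattern, which is what makes flat-output planning cheap.
        coeffs = np.asarray(coeffs, dtype=float)
        a, b = coeffs[1::2], coeffs[2::2]
        w = 2.0 * np.pi / period
        k = np.arange(1, a.size + 1)
        return np.sum(k * w * (b * np.cos(k * w * t) - a * np.sin(k * w * t)))

In this framing, the PPO2 agent's action sets or adjusts the coefficient vector and the reward scores the resulting trajectory, so no initial guess for a nonlinear program is required.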
Fund: Financial support from the National Natural Science Foundation of China (Grant No. 61601491), the Natural Science Foundation of Hubei Province, China (Grant No. 2018CFC865), and the Military Research Project of China (Grant No. YJ2020B117).
Abstract: To solve the problem of multi-target hunting by an unmanned surface vehicle (USV) fleet, a hunting algorithm based on multi-agent reinforcement learning is proposed. First, the hunting environment and a kinematic model without boundary constraints are built, and the criteria for successful target capture are given. Then, the cooperative hunting problem of a USV fleet is modeled as a decentralized partially observable Markov decision process (Dec-POMDP), and a distributed partially observable multi-target hunting Proximal Policy Optimization (DPOMH-PPO) algorithm applicable to USVs is proposed. In addition, an observation model, a reward function, and an action space suited to multi-target hunting tasks are designed. To handle the dynamically changing dimension of the observational features produced by the partially observable system, a feature embedding block is proposed: by combining two feature compression methods, column-wise max pooling (CMP) and column-wise average pooling (CAP), a fixed-size observational feature encoding is established. Finally, the centralized training and decentralized execution framework is adopted to train the hunting policy; each USV in the fleet shares the same policy and performs actions independently. Simulation experiments verify the effectiveness of the DPOMH-PPO algorithm in test scenarios with different numbers of USVs. Moreover, the advantages of the proposed model are analyzed in terms of algorithm performance, transfer across task scenarios, and self-organization capability after damage, verifying the potential deployment and application of DPOMH-PPO in real environments.
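The feature embedding block described above can be sketched as follows. This is a minimal PyTorch sketch assuming per-target observations arrive as rows of an (n_targets, feat_dim) tensor; the trailing linear projection and layer sizes are illustrative choices, not taken from the paper:

    import torch
    import torch.nn as nn

    class FeatureEmbedding(nn.Module):
        # Compress a variable number of per-target rows into a fixed-size
        # vector by concatenating column-wise max pooling (CMP) and
        # column-wise average pooling (CAP).
        def __init__(self, feat_dim: int, embed_dim: int):
            super().__init__()
            self.proj = nn.Linear(2 * feat_dim, embed_dim)

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            cmp_vec = obs.max(dim=0).values  # CMP over observed targets
            cap_vec = obs.mean(dim=0)        # CAP over observed targets
            return torch.relu(self.proj(torch.cat([cmp_vec, cap_vec])))

    # The embedding size is independent of how many targets are observed:
    emb = FeatureEmbedding(feat_dim=6, embed_dim=32)
    print(emb(torch.randn(3, 6)).shape, emb(torch.randn(7, 6)).shape)

Because both pooled vectors have fixed width, the shared policy network downstream always sees the same input dimension, no matter how many targets each USV currently observes.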
Abstract: [Objective/Significance] To address the difficulty of capturing personalized requirements and the lack of flexibility in decision-making in current crop management, this study proposes a personalized intelligent decision-making method for crop production based on large language models. [Methods] Users' personalized requirements in vegetable crop management, covering yield, labor consumption, and water and fertilizer consumption, are collected through natural-language dialogue. The crop management process is then modeled as a multi-objective optimization problem that accounts for both the user's personalized preferences and crop yield, and a reinforcement learning algorithm is employed to learn crop management policies. The water and fertilizer management policy is continuously updated through interaction with the environment, learning which actions to take under different conditions to make optimal decisions, thereby achieving personalized crop management. [Results and Discussion] Experiments on the gym-DSSAT (Gym-Decision Support System for Agrotechnology Transfer) simulation platform show that the proposed method can effectively adjust crop management strategies according to users' personalized preferences. [Conclusions] By accurately capturing users' personalized requirements, the method optimizes labor, water, and fertilizer consumption while maintaining crop yield.
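A hedged sketch of how the personalized preferences might enter the reinforcement learning loop: the dialogue-elicited weights scalarize the multi-objective trade-off into a single reward. The weight names and the function signature are illustrative assumptions; the paper's exact reward shaping is not specified here:

    def preference_reward(yield_gain, labor_cost, water_cost, fert_cost, w):
        # w = (w_yield, w_labor, w_water, w_fert): preference weights the
        # LLM front end would extract from the natural-language dialogue.
        return (w[0] * yield_gain
                - w[1] * labor_cost
                - w[2] * water_cost
                - w[3] * fert_cost)

    # A yield-focused user vs. a resource-thrifty user, same step outcome:
    print(preference_reward(10.0, 2.0, 1.5, 1.0, (1.0, 0.1, 0.1, 0.1)))
    print(preference_reward(10.0, 2.0, 1.5, 1.0, (0.4, 0.5, 0.5, 0.5)))

An agent trained in gym-DSSAT against such a reward would learn different irrigation and fertilization schedules per weight profile, which matches the behavior reported above.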
Abstract: Constrained by the fixed structure determined by their model equations, traditional quadrotor controllers struggle to cope with the control errors caused by varying model parameters and environmental disturbances. A quadrotor trajectory tracking control method based on deep reinforcement learning is proposed: the corresponding Markov decision model is constructed, and the PPO-SAG (PPO with self-adaptive guide) algorithm is proposed on top of the PPO framework. PPO-SAG adds an adaptive mechanism to the learning process, using PID expert knowledge for guidance, which improves the convergence and stability of training. Tailored to the problem, an objective function with a distance-constraint penalty and an entropy policy term is designed, and a disturbance-error information supplement structure and a trajectory feature selection structure are proposed to supply control-error information and extract the key elements of the upcoming trajectory, further improving convergence. Dynamic state normalization, advantage-function batch normalization, and reward scaling are used to represent states and express reward advantages in three-dimensional space more reasonably. Experiments on single and mixed trajectories show that the proposed PPO-SAG algorithm achieves the best convergence and stability, and ablation experiments confirm that each proposed mechanism and structure contributes positively. This study of deep-reinforcement-learning-based quadrotor trajectory tracking control under unknown disturbances provides a solution for designing more robust and efficient quadrotor controllers.
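One plausible reading of the self-adaptive PID guidance is an error-dependent blend of the policy action and a PID expert action, sketched below; the schedule for the guide weight alpha is an illustrative assumption, not the paper's exact mechanism:

    import numpy as np

    def guided_action(a_ppo, a_pid, tracking_err, err_scale=1.0):
        # The guide weight grows with tracking error, so the PID expert
        # dominates early (or after a disturbance) and fades out as the
        # learned policy tracks well.
        alpha = np.tanh(err_scale * np.linalg.norm(tracking_err))
        return (1.0 - alpha) * np.asarray(a_ppo) + alpha * np.asarray(a_pid)

During training, the blended action would be executed while PPO still updates on its own action distribution, which is one common way to inject expert guidance without freezing the learned policy.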