This paper presents a deep reinforcement learning (DRL)-based motion control method that provides unmanned aerial vehicles (UAVs) with additional flexibility while flying autonomously across dynamic unknown environments. The method is applicable in both military and civilian fields, such as penetration and rescue. The autonomous motion control problem is addressed through motion planning, action interpretation, trajectory tracking, and vehicle movement within the DRL framework. Novel DRL algorithms are presented by combining two difference-amplifying approaches with traditional DRL methods and are used to solve the motion planning problem. An improved Lyapunov guidance vector field (LGVF) method handles the trajectory-tracking problem and provides guidance control commands for the UAV. In contrast to conventional motion-control approaches, the proposed methods directly map sensor-based detections and measurements into control signals for the inner loop of the UAV, i.e., end-to-end control. The training results show that the novel DRL algorithms provide more than a 20% performance improvement over state-of-the-art DRL algorithms. The testing results demonstrate that the controller based on the novel DRL and LGVF, trained only once in a static environment, enables the UAV to fly autonomously in various dynamic unknown environments, giving the controller strong flexibility.
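The trajectory-tracking step above can be sketched with the standard Lyapunov guidance vector field for standoff tracking (not the paper's improved variant; the loiter radius `r_d` and speed `v` are illustrative parameters). The field steers the vehicle onto a circle of radius `r_d` around the target while keeping the commanded speed constant:

```python
import numpy as np

def lgvf_velocity(x, y, r_d=100.0, v=20.0):
    """Standard Lyapunov guidance vector field, relative to a target at the origin.

    Returns the commanded (vx, vy). The field converges to a loiter circle of
    radius r_d and has constant magnitude v everywhere (for r > 0).
    """
    r2 = x * x + y * y
    r = np.sqrt(r2)
    c = -v / (r * (r2 + r_d * r_d))  # common scale factor
    vx = c * (x * (r2 - r_d * r_d) + 2.0 * r_d * r * y)
    vy = c * (y * (r2 - r_d * r_d) - 2.0 * r_d * r * x)
    return vx, vy

# On the loiter circle the command is purely tangential with speed v;
# far from the target it points mostly back toward the target.
vx, vy = lgvf_velocity(100.0, 0.0)   # a point on the r_d = 100 circle
```

In a full controller these velocity commands would be converted into heading-rate references for the UAV's inner loop, which is where the DRL-planned waypoints and the guidance law meet.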
This paper proposes a linear active disturbance rejection control (LADRC) method based on the Q-learning algorithm of reinforcement learning (RL) to control the six-degree-of-freedom motion of an autonomous underwater vehicle (AUV). The number of controllers is increased to decouple the AUV's motion. At the same time, to keep the algorithm from growing too large, a simplified Q-learning algorithm tailored to the controlled variables is constructed to realize parameter adaptation of the LADRC controller. Finally, simulation experiments comparing the fixed-parameter controller with the Q-learning-based controller verify the rationality of the simplified algorithm, the effectiveness of the parameter adaptation, and the distinct advantages of the LADRC controller.
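The parameter-adaptation idea can be sketched with tabular Q-learning adjusting one LADRC tuning knob from the tracking error. This is a minimal illustration, not the paper's simplified algorithm: the error bins, action set, and the choice of observer bandwidth as the adapted gain are all assumptions.

```python
import numpy as np

# Tabular Q-learning that adapts one LADRC gain (e.g., observer bandwidth).
# States: discretized |tracking error| bins; actions: decrease / keep / increase.
N_STATES, ACTIONS = 5, np.array([-5.0, 0.0, 5.0])   # illustrative values
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = np.zeros((N_STATES, len(ACTIONS)))

def discretize(err, bins=(0.05, 0.1, 0.2, 0.5)):
    """Map |err| to a state index 0..N_STATES-1."""
    return int(np.digitize(abs(err), bins))

def choose_action(state, rng):
    if rng.random() < EPS:                 # epsilon-greedy exploration
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max Q(s',.) - Q(s,a))."""
    td = reward + GAMMA * Q[next_state].max() - Q[state, action]
    Q[state, action] += ALPHA * td

rng = np.random.default_rng(0)
s = discretize(0.3)                        # large error -> high state index
a = choose_action(s, rng)
update(s, a, reward=-0.3, next_state=discretize(0.1))
```

In a closed loop, each control cycle would read the error, pick an action, apply the gain change to the LADRC controller, and feed the resulting tracking error back as a (negative) reward.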
Autonomous unmanned aerial vehicle (UAV) manipulation is necessary for the defense department to execute tactical missions given by commanders on the future unmanned battlefield. A large amount of research has been devoted to improving the autonomous decision-making ability of UAVs in interactive environments, where finding the optimal maneuvering decision-making policy has become one of the key issues for enabling UAV intelligence. In this paper, we propose a maneuvering decision-making algorithm for autonomous air delivery based on deep reinforcement learning under the guidance of expert experience. Specifically, we refine the guidance-towards-area and guidance-towards-specific-point tasks for the air-delivery process based on traditional air-to-surface fire control methods. Moreover, we construct the UAV maneuvering decision-making model based on Markov decision processes (MDPs) and present a reward shaping method for both tasks using a potential-based function and expert-guided advice. The proposed algorithm accelerates the convergence of the maneuvering decision-making policy and increases the stability of the policy's output during the later stage of training. The effectiveness of the proposed policy is illustrated by the training curves and extensive experimental results from testing the trained policy.
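Potential-based reward shaping, as used above, adds F(s, a, s') = γΦ(s') − Φ(s) to the environment reward, a form known to leave the optimal policy unchanged. A minimal sketch follows; the negative-distance-to-target potential is an illustrative choice, not the paper's expert-guided one:

```python
import math

GAMMA = 0.99

def potential(state, target=(0.0, 0.0)):
    """Illustrative potential: negative Euclidean distance to the target,
    so states closer to the delivery point have higher potential."""
    return -math.dist(state, target)

def shaped_reward(env_reward, state, next_state):
    """r' = r + gamma * Phi(s') - Phi(s), the potential-based shaping form."""
    return env_reward + GAMMA * potential(next_state) - potential(state)

# Moving from (3, 4) toward the target at the origin earns a shaping bonus.
r = shaped_reward(0.0, (3.0, 4.0), (0.0, 4.0))
```

Because the shaping terms telescope along a trajectory, the agent receives denser feedback early in training without the final converged behavior being biased.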
To address the large time overhead and inefficiency of the point cloud downsampling stage in the two-stage point cloud object detection algorithm PointRCNN (3D object proposal generation and detection from point cloud), this study proposes the RandLA-RCNN (random sampling and an effective local feature aggregator with region-based convolutional neural networks) architecture built on the PointRCNN network. First, exploiting the efficiency of random sampling on massive point cloud data, the large-scene point cloud is downsampled. Next, encoding the spatial position of each neighboring point of the input cloud effectively improves the extraction of local features from each point's neighborhood, and an attention-based pooling rule aggregates the local feature vectors into global features. Finally, a dilated residual module formed by stacking multiple local spatial encoding units and attentive pooling units further strengthens each point's global features and avoids losing key point information. Experimental results show that the detection algorithm retains PointRCNN's strengths in 3D object detection while nearly doubling its detection speed over PointRCNN, reaching an inference rate of 16 frames/s.
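The random-sampling stage that gives this architecture its speed advantage can be sketched as follows (a generic illustration; the point counts are arbitrary, and a real pipeline would operate on LiDAR scans with per-point features):

```python
import numpy as np

def random_downsample(points, n_out, rng=None):
    """Downsample an (N, 3) point cloud to n_out points by uniform random
    choice without replacement -- O(N), versus the much costlier iterative
    farthest-point sampling used in PointNet++-style pipelines."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(points), size=n_out, replace=False)
    return points[idx]

cloud = np.random.default_rng(0).normal(size=(100_000, 3))  # synthetic scene
sub = random_downsample(cloud, 4096)
```

Random sampling can discard informative points, which is why the surrounding local spatial encoding and attentive pooling units are needed to recover neighborhood structure around the points that remain.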
Funding: supported by the National Natural Science Foundation of China (62003267), the Natural Science Foundation of Shaanxi Province (2020JQ-220), and the Open Project of Science and Technology on Electronic Information Control Laboratory (JS20201100339).
Funding: supported by the National Natural Science Foundation of China (61973175, 61973172) and the Tianjin Natural Science Foundation (19JCZDJC32800).
Funding: supported by the Key Research and Development Program of Shaanxi (2022GXLH-02-09), the Aeronautical Science Foundation of China (20200051053001), and the Natural Science Basic Research Program of Shaanxi (2020JM-147).