Funding: Supported by the National Natural Science Foundation of China (62003267), the Natural Science Foundation of Shaanxi Province (2020JQ-220), and the Open Project of Science and Technology on Electronic Information Control Laboratory (JS20201100339).
Abstract: This paper presents a deep reinforcement learning (DRL)-based motion control method that provides unmanned aerial vehicles (UAVs) with additional flexibility while flying autonomously across dynamic unknown environments. The method is applicable in both military and civilian fields, such as penetration and rescue. The autonomous motion control problem is addressed through motion planning, action interpretation, trajectory tracking, and vehicle movement within the DRL framework. Novel DRL algorithms, formed by combining two difference-amplifying approaches with traditional DRL methods, are presented and used to solve the motion planning problem. An improved Lyapunov guidance vector field (LGVF) method handles the trajectory-tracking problem and provides guidance control commands for the UAV. In contrast to conventional motion-control approaches, the proposed methods directly map sensor-based detections and measurements into control signals for the inner loop of the UAV, i.e., end-to-end control. The training experiment results show that the novel DRL algorithms provide more than a 20% performance improvement over state-of-the-art DRL algorithms. The testing experiment results demonstrate that the controller based on the novel DRL and LGVF, trained only once in a static environment, enables the UAV to fly autonomously in various dynamic unknown environments. Thus, the proposed technique gives the controller strong flexibility.
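For readers unfamiliar with guidance vector fields, the following minimal sketch shows the standard LGVF for standoff target tracking, the baseline from which the paper's improved variant departs. It is not the authors' implementation; the airspeed, standoff radius, gain, and function names are illustrative assumptions.

```python
import numpy as np

def lgvf_velocity(x, y, v=20.0, r_d=100.0):
    """Standard Lyapunov guidance vector field for standoff tracking.

    (x, y): UAV position relative to the target [m] (assumes r > 0)
    v:      commanded ground speed [m/s], held constant
    r_d:    desired standoff radius [m]
    Returns a desired velocity (vx_d, vy_d) whose integral curves
    spiral onto the circle of radius r_d around the target.
    """
    r2 = x * x + y * y
    r = np.sqrt(r2)
    denom = r * (r2 + r_d ** 2)
    vx_d = -v * (x * (r2 - r_d ** 2) + 2.0 * y * r * r_d) / denom
    vy_d = -v * (y * (r2 - r_d ** 2) - 2.0 * x * r * r_d) / denom
    return vx_d, vy_d

def heading_rate_command(x, y, psi, k_psi=1.0, v=20.0, r_d=100.0):
    """Turn the vector field into a heading-rate guidance command."""
    vx_d, vy_d = lgvf_velocity(x, y, v, r_d)
    psi_d = np.arctan2(vy_d, vx_d)                               # desired course
    err = np.arctan2(np.sin(psi_d - psi), np.cos(psi_d - psi))   # error wrapped to [-pi, pi]
    return k_psi * err                                           # commanded heading rate
```

On the desired circle (r = r_d) the field reduces to a purely tangential velocity of magnitude v, so the commanded speed stays constant while the UAV converges to and then loiters on the standoff circle.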
Funding: Projects (51205253, 11272205) supported by the National Natural Science Foundation of China; Project (2012AA7052005) supported by the National High Technology Research and Development Program of China.
Abstract: A robust H∞ directional controller for a sampled-data autonomous airship with polytopic parameter uncertainties was proposed. By the input delay approach, the linearized airship model was transformed into a continuous-time system with a time-varying delay. Sufficient conditions were then established based on the constructed Lyapunov-Krasovskii functional, which guarantee that the system is mean-square exponentially stable with H∞ performance. The desired controller can be obtained by solving these conditions. Simulation results show that a guaranteed minimum H∞ performance of γ = 1.4037 and a fast attitude response are achieved for the sampled-data autonomous airship in spite of the parameter uncertainties.
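As a hedged illustration of the input delay approach mentioned above (a generic textbook-style reformulation, not the paper's specific airship model; A, B, B_w, C, the gain K, and the sampling interval h are placeholder symbols), a sampled-data state-feedback law can be rewritten as a continuous-time law with a sawtooth time-varying delay:

```latex
% Generic input-delay reformulation of sampled-data state feedback (sketch).
% t_k are the sampling instants with t_{k+1} - t_k <= h; A, B, B_w, C, K are placeholders.
\begin{align}
  u(t) &= K x(t_k) = K x\bigl(t - \tau(t)\bigr), &
  \tau(t) &= t - t_k \in [0, h), \quad t \in [t_k, t_{k+1}), \\
  \dot{x}(t) &= A x(t) + B K x\bigl(t - \tau(t)\bigr) + B_w w(t), &
  z(t) &= C x(t).
\end{align}
```

A Lyapunov-Krasovskii functional built for this delayed system then yields linear matrix inequality conditions; their feasibility certifies exponential stability together with the H∞ disturbance-attenuation bound, i.e. the L2 gain from w to z is below γ for zero initial conditions, which is the sense in which a guaranteed performance level such as γ = 1.4037 would be reported.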