Funding: supported by the National Natural Science Foundation of China (61973175, 61973172) and the Tianjin Natural Science Foundation (19JCZDJC32800).
Abstract: This paper proposes a linear active disturbance rejection control (LADRC) method based on the Q-learning algorithm of reinforcement learning (RL) to control the six-degree-of-freedom motion of an autonomous underwater vehicle (AUV). Multiple controllers are employed to decouple the AUV's motion channels. At the same time, to keep the algorithm's state-action space from growing too large, a simplified Q-learning algorithm tailored to the controlled variables is constructed to realize parameter adaptation of the LADRC controller. Finally, simulation experiments comparing the fixed-parameter controller with the Q-learning-based controller verify the rationality of the simplified algorithm, the effectiveness of the parameter adaptation, and the distinctive advantages of the LADRC controller.
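The abstract does not specify the paper's plant model or reward design, but the idea of a simplified Q-learning tuner for LADRC parameters can be sketched as follows. This is a minimal illustration, assuming a hypothetical first-order plant, a single-state (bandit-style) Q-learning over a discrete set of observer bandwidths, and negative integrated absolute error as the reward; none of these choices are taken from the paper.

```python
import numpy as np

def simulate_ladrc(wc, wo, b0=1.0, dt=0.01, steps=500, r=1.0):
    """Simulate a hypothetical first-order plant y' = -y + b0*u + d under a
    first-order LADRC and return the integrated absolute error (IAE)."""
    y, z1, z2 = 0.0, 0.0, 0.0          # plant output and LESO estimates
    kp = wc                            # controller gain from bandwidth wc
    l1, l2 = 2.0 * wo, wo ** 2         # LESO gains from observer bandwidth wo
    iae = 0.0
    for k in range(steps):
        u = (kp * (r - z1) - z2) / b0      # LADRC control law
        d = 0.5 * np.sin(0.5 * k * dt)     # external disturbance
        e = y - z1                         # observer innovation
        z1 += dt * (z2 + b0 * u + l1 * e)  # LESO: estimate of y
        z2 += dt * (l2 * e)                # LESO: estimate of total disturbance
        y += dt * (-y + b0 * u + d)        # plant update (explicit Euler)
        iae += abs(r - y) * dt
    return iae

def q_learning_tune(candidates, episodes=60, alpha=0.3, eps=0.2, seed=0):
    """Simplified single-state Q-learning over a discrete set of observer
    bandwidths; the reward of an action is the negative IAE of one episode."""
    rng = np.random.default_rng(seed)
    q = np.zeros(len(candidates))
    for _ in range(episodes):
        if rng.random() < eps:             # epsilon-greedy exploration
            a = int(rng.integers(len(candidates)))
        else:
            a = int(np.argmax(q))
        reward = -simulate_ladrc(wc=5.0, wo=candidates[a])
        q[a] += alpha * (reward - q[a])    # Q update (no successor state)
    return candidates[int(np.argmax(q))]
```

Because the reward for each action is deterministic here, the Q-table quickly singles out the bandwidth with the lowest tracking error; the paper's simplified algorithm presumably discretizes a richer state, but the update rule has this same shape.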
Funding: supported by the National Natural Science Foundation of China (61973175, 61973172, 62073177), the Key Technologies R&D Program of Tianjin (19JCZDJC32800), and the Tianjin Research Innovation Project for Postgraduate Students (2020YJSZXB02).
Abstract: For typical first-order systems with time delay, this paper explores the control capability of linear active disturbance rejection control (LADRC). First, the critical time delay of LADRC is analyzed using the frequency-sweeping method and the Routh criterion, and the stable time-delay interval starting from zero is obtained exactly, which reveals the limitations of general LADRC under large time delay. Then, for the large-time-delay case, an LADRC controller is developed and verified to be effective, together with a robustness analysis. Finally, numerical simulations confirm the accuracy of the critical time delay and demonstrate the effectiveness and robustness of the proposed controller compared with other modified LADRCs.
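The paper derives the critical time delay analytically via frequency sweeping and the Routh criterion; as a numerical companion, the sketch below estimates the same boundary by simulation. It assumes a hypothetical delayed plant y'(t) = -y(t) + b0*u(t - tau) controlled by a standard first-order LADRC that is unaware of the delay, detects instability by a crude blow-up check, and bisects on the delay. The plant, the bandwidths (wc, wo), and the divergence threshold are illustrative choices, not the paper's setup.

```python
import numpy as np

def ladrc_delay_stable(tau, wc=2.0, wo=10.0, b0=1.0, dt=0.001, t_end=40.0):
    """Simulate a hypothetical delayed plant y'(t) = -y(t) + b0*u(t - tau)
    under a standard first-order LADRC (which ignores the delay) and report
    whether the unit-step response stays bounded."""
    n = int(t_end / dt)
    d = max(1, int(round(tau / dt)))     # delay expressed in samples
    u_buf = np.zeros(d)                  # circular buffer of past inputs
    y, z1, z2 = 0.0, 0.0, 0.0
    kp = wc
    l1, l2 = 2.0 * wo, wo ** 2
    r = 1.0
    for k in range(n):
        u = (kp * (r - z1) - z2) / b0    # LADRC control law
        u_del = u_buf[k % d]             # input applied tau seconds ago
        u_buf[k % d] = u
        e = y - z1
        z1 += dt * (z2 + b0 * u + l1 * e)  # LESO is fed the *undelayed* input
        z2 += dt * (l2 * e)
        y += dt * (-y + b0 * u_del)        # plant sees the delayed input
        if abs(y) > 1e2:                   # crude divergence test
            return False
    return True

def critical_delay(lo=0.0, hi=2.0, iters=15):
    """Bisect on the delay to approximate the stability boundary."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ladrc_delay_stable(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The mismatch between the undelayed input fed to the observer and the delayed input seen by the plant is exactly what limits general LADRC under large delay: for these illustrative bandwidths the simulated boundary sits at a small fraction of a second, consistent with the abstract's claim that the stable time-delay interval starting from zero is finite.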