Journal Article Details
IEEE Access
Adaptive Neural Network Optimized Control Using Reinforcement Learning of Critic-Actor Architecture for a Class of Non-Affine Nonlinear Systems
Bin Li [1], Guoxing Wen [1], Xue Yang [1]
[1] School of Mathematics and Statistics, Qilu University of Technology (Shandong Academy of Sciences), Jinan, China;
Keywords: Non-affine nonlinear system; optimal control; reinforcement learning (RL); neural network (NN); Lyapunov function
DOI: 10.1109/ACCESS.2021.3120835
Source: DOAJ
【 Abstract 】

In this article, an optimized tracking control scheme using a critic-actor reinforcement learning (RL) strategy is investigated for a class of non-affine nonlinear continuous-time systems. Since a non-affine system has the control input appearing implicitly in its dynamic equation, it is a more general modeling form than the affine case, which also makes the optimized control more challenging and rewarding. However, most existing RL-based optimal controllers are algorithmically complex because their actor and critic training laws are obtained by performing gradient descent on the square of the Bellman residual error, which corresponds to the approximation error of the Hamilton-Jacobi-Bellman (HJB) equation; these methods are therefore difficult to extend to non-affine systems. In the proposed optimized control, the RL algorithm is derived by applying gradient descent to a simple positive-definite function obtained from the partial derivative of the HJB equation. As a result, the proposed control algorithm is significantly simpler, which alleviates the computational burden. Finally, a typical numerical simulation is carried out, and the results further confirm the effectiveness of the proposed control scheme.
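To illustrate the general shape of a critic-actor tracking-control loop of the kind the abstract describes, the following is a highly simplified sketch. It is not the authors' algorithm: the scalar non-affine plant, the Gaussian basis functions, the learning rates, and the positive-definite training signal below are all hypothetical placeholders standing in for the function the paper derives from the HJB equation's partial derivative.

```python
# Minimal critic-actor sketch for tracking control (illustrative only).
# Plant, features, gains, and update laws are hypothetical stand-ins.
import numpy as np

def plant(x, u, dt=0.01):
    # Hypothetical scalar non-affine plant: u enters non-affinely via sin(u).
    return x + dt * (-x + u + 0.2 * np.sin(u))

def phi(z):
    # Gaussian radial-basis features over the tracking error.
    centers = np.linspace(-2.0, 2.0, 5)
    return np.exp(-(z - centers) ** 2)

Wc = np.zeros(5)        # critic NN weights
Wa = np.zeros(5)        # actor NN weights
kc, ka = 0.5, 0.5       # learning rates (hypothetical)
x, x_ref = 1.0, 0.0     # initial state and constant reference

for _ in range(2000):
    e = x - x_ref                          # tracking error
    u = -Wa @ phi(e)                       # actor output (control)
    # Gradient descent on a simple positive-definite objective,
    # a stand-in for the HJB-derived function in the paper:
    Wc -= kc * (Wc @ phi(e)) * phi(e)      # critic update
    Wa -= ka * ((Wa - Wc) @ phi(e)) * phi(e)  # actor tracks the critic
    x = plant(x, u)
```

The intended takeaway is structural: both weight vectors are updated by gradient descent on a simple scalar function rather than on the full squared Bellman residual, which is the simplification the abstract emphasizes.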

【 License 】

Unknown   
