DOI: 10.1049/cth2.12610 ISSN: 1751-8644

An efficient model‐free adaptive optimal control of continuous‐time nonlinear non‐zero‐sum games based on integral reinforcement learning with exploration

Lei Guo, Wenbo Xiong, Yuan Song, Dongming Gan


To reduce learning time and memory usage, this study presents a novel model‐free algorithm for obtaining the Nash equilibrium solution of continuous‐time nonlinear non‐zero‐sum games. Based on the integral reinforcement learning method, a new integral Hamilton–Jacobi (HJ) equation is proposed that quickly and cooperatively determines the Nash equilibrium strategies of all players. By leveraging neural network approximation and the gradient descent method, simultaneous continuous‐time adaptive tuning laws are provided for both critic and actor neural network weights. These laws enable estimation of the optimal value function and optimal policy without requiring knowledge or identification of the system dynamics. Closed‐loop system stability and convergence of the weights are guaranteed through Lyapunov analysis. Additionally, the algorithm is enhanced to reduce the number of auxiliary neural networks used in the critic. Simulation results for a two‐player non‐zero‐sum game validate the effectiveness of the proposed algorithm.
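The core idea behind integral reinforcement learning is that the value function can be evaluated along trajectory data over an interval [t, t+T] without a model of the drift dynamics: the critic weights are tuned to drive an integral Bellman residual to zero. The sketch below is a minimal, hypothetical illustration of such a gradient-descent critic update for one player; the basis functions, system states, and learning rate are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def phi(x):
    # Illustrative quadratic basis for a 2-state system: V(x) ≈ W^T phi(x)
    return np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])

def integral_bellman_error(W, x_t, x_tT, integral_cost):
    # IRL Bellman residual over [t, t+T]:
    #   e = W^T (phi(x(t+T)) - phi(x(t))) + ∫_t^{t+T} r dτ
    # No drift dynamics appear: only sampled states and the running cost.
    return W @ (phi(x_tT) - phi(x_t)) + integral_cost

def critic_update(W, x_t, x_tT, integral_cost, lr=0.1):
    # Normalized gradient-descent step on (1/2) e^2 (hypothetical tuning law)
    dphi = phi(x_tT) - phi(x_t)
    e = integral_bellman_error(W, x_t, x_tT, integral_cost)
    return W - lr * e * dphi / (1.0 + dphi @ dphi) ** 2

# Usage: with persistently exciting samples, the weights converge toward
# the weights W_true that generated the synthetic integral costs.
W_true = np.array([1.0, 0.0, 1.0])
W = np.zeros(3)
rng = np.random.default_rng(0)
for _ in range(2000):
    x_t = rng.uniform(-1, 1, 2)
    x_tT = rng.uniform(-1, 1, 2)
    # Synthetic integral cost consistent with W_true (stands in for ∫ r dτ)
    cost = -W_true @ (phi(x_tT) - phi(x_t))
    W = critic_update(W, x_t, x_tT, cost)
```

In a full non-zero-sum implementation, each player would run such a critic on its own cost, with actor weights tuned simultaneously from the critic estimates, as the abstract describes.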
