DOI: 10.1002/rnc.7280 ISSN: 1049-8923

Model‐free distributed optimal control for general discrete‐time linear systems using reinforcement learning

Xinjun Feng, Zhiyun Zhao, Wen Yang

This article proposes a novel data‐driven framework for distributed optimal consensus of discrete‐time linear multi‐agent systems under general digraphs. A fully distributed control protocol is designed using a linear quadratic regulator (LQR) approach, and is proved, via dynamic programming and the minimum principle, to be a necessary and sufficient condition for optimal control of the multi‐agent system. Moreover, the control protocol can be constructed from local information with the aid of the solution of the algebraic Riccati equation (ARE). Based on the Q‐learning method, a reinforcement learning framework is presented to find the solution of the ARE in a data‐driven way, in which data collected from a single, arbitrarily chosen follower suffice to learn the feedback gain matrix. Thus, the multi‐agent system achieves distributed optimal consensus even when the system dynamics and global information are completely unavailable. For the output feedback case, an accurate state estimator is constructed so that optimal consensus control is still realized. Furthermore, the data‐driven optimal consensus method designed in this article applies to any general digraph that contains a directed spanning tree. Finally, numerical simulations verify the validity of the proposed optimal control protocols and the data‐driven framework.
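To illustrate the data-driven idea behind the abstract, the following is a minimal sketch of Q-learning-based policy iteration for a single discrete-time LQR problem, the building block the paper uses to solve the ARE without knowing the dynamics. The system matrices, cost weights, trajectory length, and noise level below are all illustrative assumptions, not the paper's setup; the key point is that the learner only touches state/input data, never `A` or `B` directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical follower dynamics: used only to generate data.
# The learning loop never reads A or B.
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.5]])
Qc, Rc = np.eye(2), np.eye(1)   # state and input cost weights
n, m = 2, 1
p = n + m                        # dimension of z = [x; u]

def features(z):
    # Quadratic basis such that features(z) @ theta == z.T @ H @ z
    # for a symmetric H stored as its upper triangle.
    return np.array([z[i] * z[j] * (1.0 if i == j else 2.0)
                     for i in range(p) for j in range(i, p)])

def unpack(theta):
    # Rebuild the symmetric Q-function kernel H from its upper triangle.
    H, k = np.zeros((p, p)), 0
    for i in range(p):
        for j in range(i, p):
            H[i, j] = H[j, i] = theta[k]
            k += 1
    return H

K = np.zeros((m, n))   # initial stabilizing gain (A is Schur stable here)
for _ in range(10):    # policy iteration via Q-learning
    Phi, r = [], []
    x = rng.standard_normal(n)
    for _ in range(200):  # one exploratory trajectory
        u = -K @ x + 0.5 * rng.standard_normal(m)  # exploration noise
        x_next = A @ x + B @ u
        z = np.concatenate([x, u])
        z1 = np.concatenate([x_next, -K @ x_next])  # on-policy successor
        # Bellman equation: Q(z) - Q(z1) = stage cost
        Phi.append(features(z) - features(z1))
        r.append(x @ Qc @ x + u @ Rc @ u)
        x = x_next
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(r), rcond=None)
    H = unpack(theta)
    # Policy improvement from the learned Q-function kernel only.
    K = np.linalg.solve(H[n:, n:], H[n:, :n])

# Model-based check: iterate the Riccati difference equation to the ARE fixpoint.
P = np.eye(n)
for _ in range(500):
    P = Qc + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(
        Rc + B.T @ P @ B, B.T @ P @ A)
K_opt = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
print("learned K:", np.round(K, 4), " model-based K:", np.round(K_opt, 4))
```

The model-based Riccati iteration at the end is only a verification step; in the paper's model-free setting it would be unavailable, and the learned gain `K` is all the controller needs.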
