An iterative Q‐learning scheme for the global stabilization of discrete‐time linear systems subject to actuator saturation
Published in: International Journal of Robust and Nonlinear Control, 2019-06, Vol. 29 (9), p. 2660-2672
Main authors: ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract:
In this paper, we propose a model-free algorithm for global stabilization of linear systems subject to actuator saturation. The idea of gain-scheduled low gain feedback is applied to develop control laws that avoid saturation and achieve global stabilization. To design these control laws, we employ the framework of parameterized algebraic Riccati equations (AREs). Reinforcement learning techniques are developed to find the solution of the parameterized ARE without requiring any knowledge of the system dynamics. In particular, we present an iterative Q-learning scheme that searches for a low gain parameter and iteratively solves the parameterized ARE using the Bellman equation. Both state feedback and output feedback algorithms are developed. It is shown that the proposed scheme achieves model-free global stabilization under bounded controls, and that the iterates converge to the optimal solution of the ARE. Simulation results confirm the effectiveness of the proposed method.
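To illustrate the low gain feedback idea described in the abstract, here is a minimal, hypothetical scalar sketch. It uses the model-based counterpart of the scheme (direct fixed-point iteration on the parameterized discrete-time ARE with Q = εI), not the authors' model-free Q-learning algorithm; the system parameters `a`, `b`, weight `r`, and saturation level `u_max` are illustrative assumptions. The point it demonstrates is that the feedback gain obtained from the parameterized ARE shrinks with ε, so the control stays bounded while still stabilizing the system.

```python
def solve_parameterized_are(a, b, r, eps, iters=200):
    """Fixed-point iteration on the scalar parameterized discrete-time ARE:
    p = a^2*p - (a*b*p)^2 / (b^2*p + r) + eps, i.e. state weight Q(eps) = eps."""
    p = eps
    for _ in range(iters):
        p = a * a * p - (a * b * p) ** 2 / (b * b * p + r) + eps
    return p


def low_gain_feedback(a, b, r, eps):
    """Low gain state feedback u = -k*x from the ARE solution; k -> 0 as eps -> 0."""
    p = solve_parameterized_are(a, b, r, eps)
    return (a * b * p) / (b * b * p + r)


def simulate(a, b, x0, k, steps, u_max=1.0):
    """Closed-loop simulation with actuator saturation sat(u) = clip(u, -u_max, u_max)."""
    x = x0
    for _ in range(steps):
        u = max(-u_max, min(u_max, -k * x))  # saturated control input
        x = a * x + b * u
    return x
```

For example, for a discrete-time integrator (a = 1, b = 1) the low gain law drives the state to zero even from initial states far outside the region where the control saturates, which is the behavior the paper's gain-scheduled scheme achieves globally and without knowledge of a and b.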
ISSN: 1049-8923, 1099-1239
DOI: 10.1002/rnc.4514