Off-Policy Reinforcement Learning for Tracking in Continuous-Time Systems on Two Time Scales

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2021-10, Vol. 32 (10), pp. 4334-4346
Main Authors: Xue, Wenqian; Fan, Jialu; Lopez, Victor G.; Jiang, Yi; Chai, Tianyou; Lewis, Frank L.
Format: Article
Language: English
Subjects:
Description
Summary: This article applies singular perturbation theory to solve an optimal linear quadratic tracker problem for a continuous-time two-time-scale process. Previously, singular perturbation theory was applied to system regulation. It is shown that the two-time-scale tracking problem can be separated into a linear-quadratic tracker (LQT) problem for the slow system and a linear-quadratic regulator (LQR) problem for the fast system. We prove that the solutions to these two reduced-order control problems can approximate the LQT solution of the original control problem. The reduced-order slow LQT and fast LQR control problems are solved by off-policy integral reinforcement learning (IRL) using only measured data from the system. To test the effectiveness of the proposed method, we use an industrial thickening process as a simulation example and compare our method to a method with the known system model and a method without time-scale separation.
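The core idea in the summary can be illustrated with a small numerical sketch. The paper's actual method solves the slow LQT and fast LQR subproblems by data-driven off-policy integral reinforcement learning; the script below is only a model-based Kleinman policy iteration on an assumed singularly perturbed example (the matrices A11-A22, B1, B2, the weights Q, R, and the parameter eps are illustrative, not the paper's thickener model, and the tracking/reference-generator part is omitted). It shows the policy-evaluation and policy-improvement steps whose fixed point the data-driven IRL iteration also reaches.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Illustrative two-time-scale plant: x1 is slow, x2 is fast (eps small).
# These matrices are assumptions for this sketch, not taken from the paper.
eps = 0.05
A11, A12 = np.array([[-1.0]]), np.array([[0.5]])
A21, A22 = np.array([[0.4]]), np.array([[-2.0]])
B1, B2 = np.array([[1.0]]), np.array([[0.8]])

# Full-order singularly perturbed dynamics:
#   dx1/dt     = A11 x1 + A12 x2 + B1 u
#   eps dx2/dt = A21 x1 + A22 x2 + B2 u
A = np.block([[A11, A12], [A21 / eps, A22 / eps]])
B = np.vstack([B1, B2 / eps])
Q, R = np.eye(2), np.eye(1)

def kleinman_policy_iteration(A, B, Q, R, K0, iters=50):
    """Model-based policy iteration for continuous-time LQR.
    Off-policy IRL performs the same evaluation/improvement cycle,
    but estimates P_i and K_{i+1} from measured trajectories instead
    of from the model matrices (A, B)."""
    K = K0
    for _ in range(iters):
        Ak = A - B @ K                       # closed-loop matrix under current gain
        Qk = Q + K.T @ R @ K                 # stage cost under current gain
        # Policy evaluation: solve Ak^T P + P Ak + Qk = 0
        P = solve_continuous_lyapunov(Ak.T, -Qk)
        # Policy improvement: K <- R^{-1} B^T P
        K = np.linalg.solve(R, B.T @ P)
    return P, K

K0 = np.zeros((1, 2))  # initial stabilizing gain; zero suffices here since A is Hurwitz
P_pi, K_pi = kleinman_policy_iteration(A, B, Q, R, K0)
P_are = solve_continuous_are(A, B, Q, R)
print("gain from policy iteration:", K_pi)
print("max |P_pi - P_are|:", np.abs(P_pi - P_are).max())

Running the script shows the iterated gain matching the Riccati solution; the paper's contribution is to recover the corresponding slow-LQT and fast-LQR solutions from measured data only, without the explicit (A, B) used above.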
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2020.3017461