Neural network‐based optimal tracking control for partially unknown discrete‐time non‐linear systems using reinforcement learning

Bibliographic Details
Published in: IET Control Theory & Applications, 2021-01, Vol. 15 (2), p. 260-271
Main authors: Zhao, Jingang; Vishal, Prateek
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Summary: Optimal tracking control of discrete-time non-linear systems is investigated in this paper. The system drift dynamics are unknown in this investigation. First, an augmented system is constructed from the discrete-time non-linear system and the reference signal. The optimal tracking control problem of the original non-linear system is thus transformed into an optimal regulation problem for the augmented system. The solution to the optimal regulation problem can be found by solving its Hamilton–Jacobi–Bellman (HJB) equation. To solve the HJB equation, a new critic-actor neural network (NN) structure-based online reinforcement learning (RL) scheme is proposed to learn the solution of the HJB equation, while the corresponding optimal control input that minimizes the HJB equation is calculated in a forward-in-time manner without requiring value iterations, policy iterations, or the system drift dynamics. The uniform ultimate boundedness (UUB) of the NN weight errors and the closed-loop augmented system states is established via Lyapunov theory. Finally, simulation results are given to validate the proposed scheme.
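To illustrate the augmented-system formulation and the forward-in-time critic update described in the abstract, the following minimal Python sketch stacks the tracking error and the reference into an augmented state, updates a critic weight vector from a temporal-difference (Bellman) error at every step, and selects the control by a coarse minimization over candidate inputs. The plant, reference signal, basis functions, learning rate, cost weights, and the grid-search actor are illustrative assumptions, not the authors' implementation; in the paper both the critic and the actor are NNs tuned simultaneously and online.

import numpy as np

def reference(k):
    # Hypothetical sinusoidal reference signal r_k (illustrative only).
    return np.array([np.sin(0.1 * k)])

def plant(x, u):
    # Hypothetical scalar non-linear plant x_{k+1} = f(x_k) + g(x_k) u_k;
    # in the paper the drift term f is treated as unknown.
    return np.array([0.8 * np.sin(x[0]) + 0.5 * u[0]])

def features(z):
    # Simple polynomial basis for the critic, V(z) ~ W^T phi(z) (assumed basis).
    e, r = z
    return np.array([e * e, r * r, e * r, e * e * r * r])

W = np.zeros(4)                    # critic NN weights, learned online
alpha = 0.05                       # critic learning rate (assumed)
Q, R, gamma = 1.0, 0.1, 0.95       # stage-cost weights and discount (assumed)

x = np.array([0.5])
for k in range(200):
    r_k, r_next = reference(k), reference(k + 1)
    e = x - r_k                          # tracking error
    z = np.concatenate([e, r_k])         # augmented state [e_k; r_k]

    # Actor step: choose u minimizing stage cost plus the critic's value
    # estimate at the next augmented state (a coarse grid search stands in
    # for the gradient-based actor NN of the paper).
    candidates = np.linspace(-2.0, 2.0, 81)
    costs = [Q * float(e @ e) + R * u * u
             + gamma * W @ features(np.concatenate([plant(x, np.array([u])) - r_next, r_next]))
             for u in candidates]
    u = np.array([candidates[int(np.argmin(costs))]])

    # Critic step: forward-in-time weight update from the Bellman (TD) error,
    # with no separate value- or policy-iteration loop.
    x_next = plant(x, u)
    z_next = np.concatenate([x_next - r_next, r_next])
    stage = Q * float(e @ e) + R * float(u @ u)
    td = stage + gamma * W @ features(z_next) - W @ features(z)
    W = W + alpha * td * features(z)
    x = x_next

print("final tracking error:", float((x - reference(200))[0]))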
ISSN: 1751-8644, 1751-8652
DOI: 10.1049/cth2.12037