Synergetic learning for unknown nonlinear H∞ control using neural networks

Bibliographic Details
Published in: Neural Networks 2023-11, Vol. 168, pp. 287-299
Main authors: Zhu, Liao; Guo, Ping; Wei, Qinglai
Format: Article
Language: English
Abstract: The well-known H∞ control design endows a controller with robustness by rejecting perturbations from the external environment, which is difficult to achieve for completely unknown affine nonlinear systems. Accordingly, the immediate objective of this paper is to develop an on-line, real-time synergetic learning algorithm so that a data-driven H∞ controller can be obtained. By converting the H∞ control problem into a two-player zero-sum game, a model-free Hamilton–Jacobi–Isaacs equation (MF-HJIE) is first derived using off-policy reinforcement learning, followed by a proof of equivalence between the MF-HJIE and the conventional HJIE. Next, by applying the temporal difference to the MF-HJIE, a synergetic evolutionary rule with experience replay is designed to learn the optimal value function, the optimal control, and the worst perturbation; the learning can be performed on-line and in real time along the system state trajectory. It is proven that the synergetic learning system constructed from the system plant and the evolutionary rule is uniformly ultimately bounded. Finally, simulation results on an F16 aircraft system and a nonlinear system demonstrate the tractability of the proposed method.

Highlights:
• A model-free Hamilton–Jacobi–Isaacs equation is derived for H∞ control problems.
• A real-time, on-line evolutionary rule for tuning neural network weights is developed.
• The system and neural network approximation errors achieve synergetic learning.
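For orientation, the conventional HJIE mentioned in the abstract is commonly stated as follows for an affine nonlinear system with control u and disturbance w; the notation below (f, g, k, Q, R, γ) is the generic one used in the zero-sum-game literature and is not taken from the paper itself.

For dynamics $\dot{x} = f(x) + g(x)u + k(x)w$ and value function
$$V(x(0)) = \int_0^{\infty} \big( Q(x) + u^{\top} R\, u - \gamma^{2} w^{\top} w \big)\, dt,$$
the conventional HJIE reads
$$0 = Q(x) + \nabla V^{\top} f(x) - \tfrac{1}{4}\, \nabla V^{\top} g(x) R^{-1} g(x)^{\top} \nabla V + \tfrac{1}{4\gamma^{2}}\, \nabla V^{\top} k(x) k(x)^{\top} \nabla V,$$
with the associated saddle-point policies
$$u^{*}(x) = -\tfrac{1}{2} R^{-1} g(x)^{\top} \nabla V, \qquad w^{*}(x) = \tfrac{1}{2\gamma^{2}} k(x)^{\top} \nabla V.$$
Evaluating these expressions requires knowledge of f, g, and k; removing that dependence is exactly what the paper's model-free variant (MF-HJIE) is derived for.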
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2023.09.029