Dynamic event-triggered controller design for nonlinear systems: Reinforcement learning strategy

Bibliographic Details
Published in: Neural Networks, 2023-06, Vol. 163, pp. 341-353
Authors: Wang, Zichen; Wang, Xin; Pang, Ning
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: This work addresses the optimal control problem for discrete-time nonstrict-feedback nonlinear systems by combining a reinforcement learning-based backstepping technique with neural networks. The dynamic event-triggered control strategy introduced in this paper reduces the communication frequency between the actuator and the controller. Based on the reinforcement learning strategy, actor–critic neural networks are employed to implement the n-th-order backstepping framework. A neural network weight-updating algorithm is then developed to reduce the computational burden and avoid the local-optimum problem. Furthermore, a novel dynamic event-triggered strategy is introduced that markedly outperforms the previously studied static event-triggered strategy. Moreover, combined with Lyapunov stability theory, all signals in the closed-loop system are rigorously proven to be semiglobally uniformly ultimately bounded. Finally, the practicality of the proposed control algorithms is further demonstrated by numerical simulation examples.
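
The abstract describes, but does not spell out, the dynamic event-triggering rule, so the sketch below only illustrates the general idea in Python: an internal dynamic variable eta accumulates slack and relaxes a static error-threshold condition, which is why a dynamic trigger transmits no more often than its static counterpart. The class DynamicEventTrigger and the parameters sigma, lam, theta, and eta0 are hypothetical placeholders assumed for illustration; they are not the triggering condition or gains derived in the paper.

import numpy as np


class DynamicEventTrigger:
    """Generic discrete-time dynamic event-triggering rule (illustrative only).

    The parameters and the triggering condition are common textbook choices
    assumed for illustration, not the condition designed in the paper.
    """

    def __init__(self, sigma=0.5, lam=0.9, theta=1.0, eta0=1.0):
        self.sigma = sigma      # state-dependent threshold gain (assumed)
        self.lam = lam          # decay rate of the internal dynamic variable (assumed)
        self.theta = theta      # weight on the static part of the condition (assumed)
        self.eta = eta0         # internal dynamic variable, kept nonnegative
        self.x_last = None      # state at the most recent triggering instant

    def step(self, x):
        """Return True if the controller-actuator channel should be updated."""
        x = np.asarray(x, dtype=float)
        if self.x_last is None:         # always transmit the first sample
            self.x_last = x.copy()
            return True
        e = x - self.x_last             # measurement error since the last trigger
        margin = self.sigma * np.linalg.norm(x) - np.linalg.norm(e)
        triggered = self.eta + self.theta * margin < 0.0
        # The internal variable accumulates the margin, so an isolated violation
        # of the static condition (margin < 0) does not immediately force a trigger.
        self.eta = max(self.lam * self.eta + margin, 0.0)
        if triggered:
            self.x_last = x.copy()      # reset the error at a triggering instant
        return triggered

In this sketch, a static rule would transmit whenever margin < 0; the dynamic rule waits until the accumulated slack eta is exhausted, so triggering instants occur no earlier and are typically sparser, which is the intuition behind the reduced communication frequency claimed in the abstract.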
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2023.04.008