Neural Operators for Bypassing Gain and Control Computations in PDE Backstepping
Saved in:

Published in: IEEE Transactions on Automatic Control, 2024-08, Vol. 69 (8), pp. 5310-5325
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: We introduce a framework for eliminating the computation of controller gain functions in partial differential equation (PDE) control. We learn the nonlinear operator from the plant parameters to the control gains with a (deep) neural network. We provide closed-loop stability guarantees (global exponential) under a neural network (NN) approximation of the feedback gains. Whereas in existing PDE backstepping finding the gain kernel requires a (one-time, offline) solution to an integral equation, the neural operator (NO) approach we propose learns the mapping from the functional coefficients of the plant PDE to the kernel function by employing a sufficiently high number of offline numerical solutions to the kernel integral equation, for a large enough number of the PDE model's different functional coefficients. We prove the existence of a DeepONet approximation, with arbitrarily high accuracy, of the exact nonlinear continuous operator mapping PDE coefficient functions into gain functions. Once proven to exist, learning of the NO is standard, is completed "once and for all" (never online), and the kernel integral equation never needs to be solved again for any new functional coefficient whose magnitude does not exceed that of the functional coefficients used for training. We also present an extension from approximating the gain kernel operator to approximating the full feedback-law mapping, from plant parameter functions and state measurement functions to the control input, with semiglobal practical stability guarantees. Simulation illustrations are provided, and the code is available online:
https://github.com/lukebhan/NeuralOperatorsForGainKernels
This framework, by eliminating real-time recomputation of gains, has the potential to be game-changing for adaptive control of PDEs and gain-scheduling control of nonlinear PDEs. This article requires no prior background in machine learning or neural networks.
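The DeepONet architecture the abstract refers to represents the learned operator's output as an inner product between a "branch" encoding of the input function (here, samples of the plant's functional coefficient) and a "trunk" encoding of the query point (here, the argument of the gain kernel). The following is a minimal NumPy sketch of that forward pass only, with hypothetical layer sizes and untrained random weights; it illustrates the operator structure, not the authors' actual trained networks:

```python
# Minimal DeepONet forward-pass sketch (NumPy only).
# All sizes, names, and the sample coefficient are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def mlp(widths):
    """Random-weight MLP parameters: a list of (W, b) pairs (untrained)."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(widths[:-1], widths[1:])]

def forward(params, x):
    """Apply the MLP with tanh activations on all but the last layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

m, p = 50, 32                # sensor points for the coefficient; basis size
branch = mlp([m, 64, p])     # encodes the plant coefficient function
trunk  = mlp([2, 64, p])     # encodes the kernel argument (x, y)

beta = np.sin(np.linspace(0.0, 1.0, m))   # sampled coefficient function beta(.)
xy = np.array([[0.5, 0.25]])              # one query point in the domain y <= x

# DeepONet prediction: k_hat(x, y) = <branch(beta), trunk(x, y)>
k_hat = forward(branch, beta[None, :]) @ forward(trunk, xy).T
print(k_hat.shape)  # (1, 1): one kernel value per (coefficient, query) pair
```

Training such a network amounts to fitting the branch and trunk weights to offline numerical kernel solutions, after which evaluating a gain for a new coefficient is a single cheap forward pass rather than a fresh integral-equation solve.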
ISSN: 0018-9286, 1558-2523
DOI: 10.1109/TAC.2023.3347499