Approximate dynamic programming based optimal control applied to an integrated plant with a reactor and a distillation column with recycle
Published in: AIChE Journal, 2009-04, Vol. 55 (4), p. 919-930
Main authors:
Format: Article
Language: English
Online access: Full text
Abstract: An approximate dynamic programming (ADP) method has shown good performance in solving optimal control problems in many small-scale process control applications. The offline computational procedure of ADP constructs an approximation of the optimal "cost-to-go" function, which parameterizes the optimal control policy with respect to the state variable. With the approximate "cost-to-go" function computed, the multistage optimization problem that would otherwise need to be solved online at every sample time reduces to a single-stage optimization, thereby significantly lessening the real-time computational load. Moreover, stochastic uncertainties can be addressed relatively easily within this framework. Nonetheless, the existing ADP method requires excessive offline computation when applied to a high-dimensional system. A case study of a reactor and a distillation column with recycle was used to illustrate this issue. Several ways were then proposed to reduce the computational load so that the ADP method can be applied to high-dimensional integrated plants. The results showed that the approach is significantly superior to NMPC in both deterministic and stochastic cases. © 2009 American Institute of Chemical Engineers AIChE J, 2009
ISSN: 0001-1541, 1547-5905
DOI: 10.1002/aic.11805
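The abstract describes an offline/online split: a "cost-to-go" approximation is fitted offline, so that at each sample time the online controller only solves a single-stage problem instead of a long-horizon one. The sketch below illustrates that split on a hypothetical two-state linear plant with a quadratic function approximator; the plant model, cost weights, sampling range, and fitted-value-iteration loop are illustrative assumptions only, not the paper's reactor/distillation-column model or its actual approximation scheme.

```python
# Illustrative sketch of the offline/online split in approximate dynamic programming.
# All model and tuning choices here are hypothetical stand-ins for the real plant.
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical 2-state, 1-input discrete-time plant: x+ = A x + B u
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
B = np.array([[0.0],
              [0.5]])
Q, R, gamma = np.eye(2), 0.1, 0.98           # stage cost x'Qx + R u^2, discount factor

def stage_cost(x, u):
    return float(x @ Q @ x + R * u**2)

def step(x, u):
    return A @ x + (B * u).ravel()

# ---- Offline: fit an approximate cost-to-go J(x) ~ phi(x)' theta ----------------
def phi(x):                                   # quadratic features [x1^2, x1*x2, x2^2]
    return np.array([x[0]**2, x[0] * x[1], x[1]**2])

rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, size=(200, 2))   # sampled state points
theta = np.zeros(3)

for _ in range(60):                           # fitted value iteration (Bellman backups)
    targets = []
    for x in samples:
        bellman = lambda u: stage_cost(x, u) + gamma * phi(step(x, u)) @ theta
        targets.append(minimize_scalar(bellman, bounds=(-5, 5), method="bounded").fun)
    Phi = np.array([phi(x) for x in samples])
    theta, *_ = np.linalg.lstsq(Phi, np.array(targets), rcond=None)

# ---- Online: single-stage optimization at each sample time ----------------------
def adp_controller(x):
    obj = lambda u: stage_cost(x, u) + gamma * phi(step(x, u)) @ theta
    return minimize_scalar(obj, bounds=(-5, 5), method="bounded").x

x = np.array([0.8, -0.5])
for k in range(10):
    u = adp_controller(x)                     # one-step lookahead instead of a long horizon
    x = step(x, u)
    print(f"k={k}  u={u:+.3f}  |x|={np.linalg.norm(x):.4f}")
```

The point of the structure is the one highlighted in the abstract: the expensive multistage optimization is pushed into the offline fitting stage, and the online controller solves only a single-stage problem per sample time, which is where the real-time savings relative to NMPC come from.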