Development of a Bias Compensating Q-Learning Controller for a Multi-Zone HVAC Facility

Bibliographic Details
Published in: IEEE/CAA Journal of Automatica Sinica, 2023-08, Vol. 10 (8), pp. 1704-1715
Main Authors: Asad Rizvi, Syed Ali; Pertzborn, Amanda J.; Lin, Zongli
Format: Article
Language: English
Description
Abstract: We present the development of a bias-compensating reinforcement learning (RL) algorithm that optimizes thermal comfort (by minimizing tracking error) and control utilization (by penalizing setpoint deviations) in a multi-zone heating, ventilation, and air-conditioning (HVAC) lab facility subject to unmeasurable disturbances and unknown dynamics. It is shown that the presence of unmeasurable disturbances results in an inconsistent learning equation in traditional RL controllers, leading to parameter estimation bias (even with integral action support) and, in the extreme case, to divergence of the learning algorithm. We demonstrate this issue by applying the popular Q-learning algorithm to linear quadratic regulation (LQR) of a multi-zone HVAC environment and showing that, even with integral support, the algorithm exhibits a bias issue during the learning phase when the HVAC disturbance is unmeasurable due to unknown heat gains, occupancy variations, light sources, and outside weather changes. To address this difficulty, we present a bias-compensating learning equation that learns a lumped bias term, arising from disturbances (and possibly other sources), in conjunction with the optimal control parameters. Experimental results show that the proposed scheme not only recovers the bias-free optimal control parameters but does so without explicitly learning the dynamic model or estimating the disturbances, demonstrating the effectiveness of the algorithm in addressing the above challenges.
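
The following is a minimal Python/NumPy sketch of the idea described in the abstract, not the authors' implementation: the system matrices A and B, the constant disturbance d, the cost weights, the discount factor, and all other numerical values are illustrative assumptions. It evaluates a quadratic Q-function by least squares on the Bellman equation, and the learning equation appends linear and constant regressors so that a lumped bias term induced by the unmeasured disturbance is estimated jointly with the Q-function parameters; dropping those appended terms would let the disturbance leak into the quadratic coefficients and bias the recovered feedback gain, mirroring the inconsistency described above.

import numpy as np

# Minimal sketch of bias-compensated Q-learning for LQR (illustrative only;
# not the paper's implementation). All numerical values are assumptions.
rng = np.random.default_rng(0)
nx, nu = 2, 1                            # state and input dimensions
n = nx + nu
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # hypothetical zone dynamics
B = np.array([[0.0], [0.5]])
d = np.array([0.3, -0.2])                # constant, unmeasurable disturbance
Qw, Rw, gamma = np.eye(nx), 0.1 * np.eye(nu), 0.95

def basis(x, u):
    # Quadratic monomials of z = [x; u], plus linear terms and a constant
    # regressor; the constant's coefficient is the lumped bias term.
    z = np.concatenate([x, u])
    return np.concatenate([np.outer(z, z)[np.triu_indices(n)], z, [1.0]])

def unpack(th_quad):
    # Rebuild the symmetric Q-function matrix H from monomial coefficients.
    H = np.zeros((n, n))
    H[np.triu_indices(n)] = th_quad
    return (H + H.T) / 2.0

K = np.zeros((nu, nx))                   # initial stabilizing gain (A is stable)
for it in range(5):                      # approximate policy iteration
    x = rng.normal(size=nx)
    Phi, y = [], []
    for t in range(500):
        u = -K @ x + 0.5 * rng.normal(size=nu)   # exploration noise
        cost = x @ Qw @ x + u @ Rw @ u
        x_next = A @ x + B @ u + d               # d is never measured
        # Bias-compensated learning equation: the appended linear/constant
        # regressors absorb the disturbance-induced offset that would
        # otherwise bias the quadratic parameters.
        Phi.append(basis(x, u) - gamma * basis(x_next, -K @ x_next))
        y.append(cost)
        x = x_next
    theta = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)[0]
    H = unpack(theta[: n * (n + 1) // 2])
    K = np.linalg.solve(H[nx:, nx:], H[nx:, :nx])  # greedy gain update

print("learned feedback gain K:\n", K)
print("lumped bias coefficient:", theta[-1])

In this sketch the learned gain coincides with the disturbance-free (discounted) LQR gain because, with the appended regressors, the constant disturbance only shifts the linear and constant parts of the Q-function, leaving the quadratic block unbiased.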
ISSN: 2329-9266, 2329-9274
DOI: 10.1109/JAS.2023.123624