
Trade-off decisions in a novel deep reinforcement learning for energy savings in HVAC systems

Pages 809-831 | Received 01 Nov 2021, Accepted 30 Jun 2022, Published online: 04 Aug 2022
 

Abstract

This paper presents Model-Based Reinforcement Learning (MB-RL) techniques that simultaneously control the indoor air temperature and CO2 concentration level and minimize the energy consumption of heating, ventilating, and air conditioning (HVAC) systems. For this purpose, a trade-off is made between maintaining indoor comfort levels and minimizing energy consumption. The HVAC system is controlled using the Deterministic Policy RL (DP-RL) method. In addition, a nonlinear autoregressive exogenous neural network (NARX-NN) is employed as an approximation function within the DP-RL method, yielding a hybrid DP-NARX-RL controller. By applying the DP-RL and DP-NARX-RL controllers to the HVAC system of a typical building, parameters such as indoor comfort levels, electrical power and energy consumption, and energy costs under various pricing schemes are evaluated for two case studies. In both cases, the results show that DP-NARX-RL outperforms the DP-RL, RL, and PID controllers.
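The abstract describes a trade-off between indoor comfort (temperature and CO2) and energy consumption, which in an RL setting is typically encoded in the reward signal. The sketch below is a hypothetical illustration of such a weighted reward, not the paper's actual formulation; the setpoint, CO2 threshold, and weights (`t_set`, `co2_max`, `w_t`, `w_c`, `w_e`) are assumed values for illustration only.

```python
def tradeoff_reward(t_in, co2, power,
                    t_set=22.0, co2_max=800.0,
                    w_t=1.0, w_c=0.5, w_e=0.01):
    """Hypothetical comfort/energy trade-off reward for an HVAC RL agent.

    t_in  : indoor air temperature (deg C)
    co2   : indoor CO2 concentration (ppm)
    power : electrical power drawn by the HVAC system (W)
    """
    # Penalize deviation from the temperature setpoint.
    comfort_temp = -w_t * abs(t_in - t_set)
    # Penalize CO2 only above the comfort threshold.
    comfort_co2 = -w_c * max(0.0, co2 - co2_max) / 100.0
    # Penalize energy use; w_e balances comfort against consumption.
    energy_cost = -w_e * power
    return comfort_temp + comfort_co2 + energy_cost
```

Raising `w_e` relative to `w_t` and `w_c` shifts the learned policy toward energy savings at the expense of comfort, which is the trade-off decision the paper evaluates.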

Disclosure statement

No potential conflict of interest was reported by the author(s).
