
Rule reduction for control of a building cooling system using explainable AI

Pages 832–847 | Received 06 May 2022, Accepted 14 Jul 2022, Published online: 04 Aug 2022

References

  • Afram, A., and F. Janabi-Sharifi. 2014. “Theory and Applications of HVAC Control Systems – A Review of Model Predictive Control (MPC).” Building and Environment 72: 343–355. doi:10.1016/j.buildenv.2013.11.016.
  • Ahn, K. U., D. K. Kim, Y. J. Kim, S. H. Yoon, and C. S. Park. 2016. “Issues to Be Solved for Energy Simulation of an Existing Office Building.” Sustainability 8: 345. doi:10.3390/su8040345.
  • Ahn, K. U., and C. S. Park. 2020. “Application of Deep Q-Networks for Model-Free Optimal Control Balancing Between Different HVAC Systems.” Science and Technology for the Built Environment 26: 61–74.
  • Alharin, A., T. N. Doan, and A. M. Sartipi. 2020. “Reinforcement Learning Interpretation Methods: A Survey.” IEEE Access 8: 171058–171077. doi:10.1109/ACCESS.2020.3023394.
  • American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). 2002. Guideline 14-2002: Measurement of Energy and Demand Savings. Technical Report. Atlanta, GA: ASHRAE.
  • Bastani, O., Y. Pu, and A. Solar-Lezama. 2018. “Verifiable Reinforcement Learning via Policy Extraction.” Proceedings of the 32nd Conference on Neural Information Processing Systems, Montreal, Canada, arXiv:1805.08328v2.
  • Bucila, C., R. Caruana, and A. Niculescu-Mizil. 2006. “Model Compression.” Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Philadelphia, Pennsylvania.
  • Cacabelos, A., P. Eguia, L. Febrero, and E. Granada. 2017. “Development of a New Multi-stage Building Energy Model Calibration Methodology and Validation in a Public Library.” Energy and Buildings 146: 182–199.
  • Chen, Y., L. K. Norford, H. W. Samuelson, and A. Malkawi. 2018. “Optimal Control of HVAC and Window Systems for Natural Ventilation Through Reinforcement Learning.” Energy and Buildings 169: 195–205.
  • Coppens, Y., K. Efthymiadis, T. Lenaerts, and A. Nowe. 2019. “Distilling Deep Reinforcement Learning Policies in Soft Decision Trees.” In Proceedings of the IJCAI 2019 Workshop on Explainable Artificial Intelligence, edited by T. Miller, R. Weber, and D. Magazzeni, 1–6. Cotai, Macao: IJCAI 2019 Workshop on Explainable Artificial Intelligence.
  • Ding, X., W. Du, and A. E. Cerpa. 2020. “MB2C: Model-Based Deep Reinforcement Learning for Multi-Zone Building Control.” In Proceedings of the 7th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation (BuildSys '20), 50–59. New York: Association for Computing Machinery.
  • Ding, Z., P. Hernandez-Leal, G. W. Ding, C. Li, and R. Huang. 2021. CDT: Cascading Decision Trees for Explainable Reinforcement Learning, arXiv:2011.07553v2.
  • DOE. 2015. “Chapter 5: Increasing Efficiency of Building Systems and Technologies.” Quadrennial Technology Review: An Assessment of Energy Technologies and Research Opportunities. U.S. Department of Energy.
  • DOE. 2020. EnergyPlus 9.3 Engineering Reference: The Encyclopedic Reference to EnergyPlus Calculations. U.S. Department of Energy.
  • Došilović, F. K., M. Brčić, and N. Hlupić. 2018. “Explainable Artificial Intelligence: A Survey.” In Proceedings of the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 210–215. IEEE. doi:10.23919/MIPRO.2018.8400040.
  • Dulac-Arnold, G., D. Mankowitz, and T. Hester. 2019. “Challenges of Real-World Reinforcement Learning.” Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97.
  • Finney, S., N. H. Gardiol, L. P. Kaelbling, and T. Oates. 2002. “The Thing That We Tried Didn’t Work Very Well: Deictic Representation in Reinforcement Learning.” Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence, Alberta, Canada, arXiv:1301.0567v1.
  • Gunning, D., E. Vorm, J. Y. Wang, and M. Turek. 2021. “DARPA’s Explainable AI (XAI) Program: A Retrospective.” Applied AI Letters 2 (4). doi:10.1002/ail2.61.
  • Hong, T., J. Langevin, and K. Sun. 2018. “Building Simulation: Ten Challenges.” Building Simulation 11 (5): 871–898.
  • Lipton, Z. C. 2017. The Mythos of Model Interpretability, arXiv:1606.03490v3.
  • Madumal, P., T. Miller, L. Sonenberg, and F. Vetere. 2020. Distal Explanations for Model-free Explainable Reinforcement Learning, arXiv:2001.10284v2.
  • Mnih, V., K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, et al. 2015. “Human-Level Control Through Deep Reinforcement Learning.” Nature 518: 529–533.
  • Myles, A. J., R. N. Feudale, Y. Liu, N. A. Woody, and S. D. Brown. 2004. “An Introduction to Decision Tree Modeling.” Journal of Chemometrics 18 (6): 275–285.
  • Nagarathinam, S., V. Menon, A. Vasan, and A. Sivasubramaniam. 2020. “MARCO – Multi-Agent Reinforcement Learning Based Control of Building HVAC Systems.” In Proceedings of the 11th ACM International Conference on Future Energy Systems (e-Energy '20), Virtual Event, 57–67. New York: Association for Computing Machinery.
  • Roth, A. M., N. Topin, P. Jamshidi, and M. Veloso. 2019. Conservative Q-Improvement: Reinforcement Learning for an Interpretable Decision-Tree Policy, arXiv:1907.01180v1.
  • Silver, D., A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, et al. 2016. “Mastering the Game of Go with Deep Neural Networks and Tree Search.” Nature 529: 484–489.
  • Sutton, R. S., and A. G. Barto. 1998. Reinforcement Learning: An Introduction. A Bradford Book. London: MIT Press.
  • Wang, Z., and T. Hong. 2020. “Reinforcement Learning for Building Controls: The Opportunities and Challenges.” Applied Energy 269: 115036.
  • Yang, L., Z. Nagy, P. Goffin, and A. Schlueter. 2015. “Reinforcement Learning for Optimal Control of Low Exergy Buildings.” Applied Energy 156: 577–586.
  • Zhang, Z., A. Chong, Y. Pan, C. Zhang, and K. P. Lam. 2019. “Whole Building Energy Model for HVAC Optimal Control: A Practical Framework Based on Deep Reinforcement Learning.” Energy and Buildings 199: 472–490.
  • Zhao, H., J. Zhao, T. Shu, and Z. Pan. 2021. “Hybrid-Model-Based Deep Reinforcement Learning for Heating, Ventilation, and Air-Conditioning Control.” Frontiers in Energy Research 8: 610518. doi:10.3389/fenrg.2020.610518.
