
Reinforcement learning applied to wastewater treatment process control optimization: Approaches, challenges, and path forward

Pages 1775-1794 | Published online: 06 Mar 2023

References

  • Agarwal, R., Schuurmans, D., & Norouzi, M. (2020). An optimistic perspective on offline reinforcement learning. Proceedings of the 37th International Conference on Machine Learning.
  • Alex, J., Benedetti, L., Copp, J., Gernaey, K. V., Jeppsson, U., Nopens, I., Pons, M. N., Steyer, J. P., & Vanrolleghem, P. (2008). Benchmark Simulation Model no. 1 (BSM1). Lund University.
  • Åmand, L., Olsson, G., & Carlsson, B. (2013). Aeration control—A review. Water Science and Technology: A Journal of the International Association on Water Pollution Research, 67(11), 2374–2398. https://doi.org/10.2166/wst.2013.139
  • Badia, A. P., Piot, B., Kapturowski, S., Sprechmann, P., Vitvitskyi, A., Guo, D., & Blundell, C. (2020). Agent57: Outperforming the Atari Human Benchmark. http://arxiv.org/abs/2003.13350
  • Bai, C., Wang, L., Yang, Z., Deng, Z., Garg, A., Liu, P., & Wang, Z. (2022). Pessimistic bootstrapping for uncertainty-driven offline reinforcement learning. http://arxiv.org/abs/2202.11566
  • Belia, E., Amerlinck, Y., Benedetti, L., Johnson, B., Sin, G., Vanrolleghem, P. A., Gernaey, K. V., Gillot, S., Neumann, M. B., Rieger, L., Shaw, A., & Villez, K. (2009). Wastewater treatment modelling: Dealing with uncertainties. Water Science and Technology: A Journal of the International Association on Water Pollution Research, 60(8), 1929–1941. https://doi.org/10.2166/wst.2009.225
  • Cao, W., & Yang, Q. (2020). Online sequential extreme learning machine based adaptive control for wastewater treatment plant. Neurocomputing, 408, 169–175. https://doi.org/10.1016/j.neucom.2019.05.109
  • Chen, K., Wang, H., Valverde-Pérez, B., Zhai, S., Vezzaro, L., & Wang, A. (2021). Optimal control towards sustainable wastewater treatment plants based on multi-agent reinforcement learning. Chemosphere, 279, 130498. https://doi.org/10.1016/j.chemosphere.2021.130498
  • Chen, X., Zhong, W., Peng, X., Du, P., & Li, Z. (2022). An improved adaptive dynamic programming algorithm based on fuzzy extended state observer for dissolved oxygen concentration control. Processes, 10(12), 2618. https://doi.org/10.3390/pr10122618
  • Corominas, L., Garrido-Baserba, M., Villez, K., Olsson, G., Cortés, U., & Poch, M. (2018). Transforming data into knowledge for improved wastewater treatment operation: A critical review of techniques. Environmental Modelling & Software, 106, 89–103. https://doi.org/10.1016/j.envsoft.2017.11.023
  • Dalal, G., Dvijotham, K., Vecerik, M., Hester, T., Paduraru, C., & Tassa, Y. (2018). Safe exploration in continuous action spaces. http://arxiv.org/abs/1801.08757
  • Drewnowski, J., Remiszewska-Skwarek, A., Duda, S., & Łagód, G. (2019). Aeration process in bioreactors as the main energy consumer in a wastewater treatment plant. Review of solutions and methods of process optimization. Processes, 7(5), 311. https://doi.org/10.3390/pr7050311
  • Egle, L., Rechberger, H., & Zessner, M. (2015). Overview and description of technologies for recovering phosphorus from municipal wastewater. Resources, Conservation and Recycling, 105, 325–346. https://doi.org/10.1016/j.resconrec.2015.09.016
  • Bilgin, E. (2020). Mastering reinforcement learning with Python. Packt Publishing.
  • Fernandez de Canete, J., del Saz-Orozco, P., Baratti, R., Mulas, M., Ruano, A., & Garcia-Cerezo, A. (2016). Soft-sensing estimation of plant effluent concentrations in a biological wastewater treatment plant using an optimal neural network. Expert Systems with Applications, 63, 8–19. https://doi.org/10.1016/j.eswa.2016.06.028
  • Fernandez-Gauna, B., Osa, J. L., & Graña, M. (2018). Experiments of conditioned reinforcement learning in continuous space control tasks. Neurocomputing, 271, 38–47. https://doi.org/10.1016/j.neucom.2016.08.155
  • Fujimoto, S., Meger, D., & Precup, D. (2018). Off-policy deep reinforcement learning without exploration. http://arxiv.org/abs/1812.02900
  • Fujita, Y., Nagarajan, P., Kataoka, T., & Ishikawa, T. (2021). ChainerRL: A deep reinforcement learning library. Journal of Machine Learning Research, 22, 1–14. https://github.com/chainer/chainerrl.
  • Gao, T., & Jojic, V. (2016). Degrees of freedom in deep neural networks. http://arxiv.org/abs/1603.09260
  • Goulart, D. A., & Pereira, R. D. (2020). Autonomous pH control by reinforcement learning for electroplating industry wastewater. Computers and Chemical Engineering, 140, 106909.
  • Granzoto, M. R., Seabra, I., Malvestiti, J. A., Cristale, J., & Dantas, R. F. (2021). Integration of ozone, UV/H2O2 and GAC in a multi-barrier treatment for secondary effluent polishing: Reuse parameters and micropollutants removal. The Science of the Total Environment, 759, 143498. https://doi.org/10.1016/j.scitotenv.2020.143498
  • Gu, S., Lillicrap, T., Sutskever, I., & Levine, S. (2016). Continuous deep Q-learning with model-based acceleration. http://arxiv.org/abs/1603.00748
  • Gu, Z., She, C., Hardjawana, W., Lumb, S., McKechnie, D., Essery, T., & Vucetic, B. (2021). Knowledge-assisted deep reinforcement learning in 5G scheduler design: From theoretical framework to implementation. IEEE Journal on Selected Areas in Communications, 39(7), 2014–2028. https://doi.org/10.1109/JSAC.2021.3078498
  • Guilera, J., Andreu, T., Basset, N., Boeltken, T., Timm, F., Mallol, I., & Morante, J. R. (2020). Synthetic natural gas production from biogas in a waste water treatment plant. Renewable Energy, 146, 1301–1308. https://doi.org/10.1016/j.renene.2019.07.044
  • Han, M., Zhang, X., Xu, L., May, R., Pan, S., Wu, J., Fleyeh, H., & Zhang, X. (2018). A review of reinforcement learning methodologies on control systems for building energy. Dalarna University, 1–26.
  • Hernández-del-Olmo, F., & Gaudioso, E. (2011). Reinforcement learning techniques for the control of wastewater treatment plants. In J. M. Ferrandez, J. R. Alvarez Sanchez, F. de la Paz, & F. Javier Toledo (Eds.), New Challenges on Bioinspired Applications (pp. 215–222). Springer.
  • Hernández-del-Olmo, F., Gaudioso, E., & Nevado, A. (2012). Autonomous adaptive and active tuning up of the dissolved oxygen setpoint in a wastewater treatment plant using reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(5), 768–774. https://doi.org/10.1109/TSMCC.2011.2162401
  • Hernández-del-Olmo, F., Gaudioso, E., Dormido, R., & Duro, N. (2016). Energy and environmental efficiency for the N-ammonia removal process in wastewater treatment plants by means of reinforcement learning. Energies, 9(9), 755. https://doi.org/10.3390/en9090755
  • Hernández-del-Olmo, F., Gaudioso, E., Dormido, R., & Duro, N. (2018). Tackling the start-up of a reinforcement learning agent for the control of wastewater treatment plants. Knowledge-Based Systems, 144, 9–15. https://doi.org/10.1016/j.knosys.2017.12.019
  • Hernández-del-Olmo, F., Llanes, F. H., & Gaudioso, E. (2012). An emergent approach for the control of wastewater treatment plants by means of reinforcement learning techniques. Expert Systems with Applications, 39(3), 2355–2360. https://doi.org/10.1016/j.eswa.2011.08.062
  • Holman, J. B., & Wareham, D. G. (2003). Oxidation-reduction potential as a monitoring tool in a low dissolved oxygen wastewater treatment process. Journal of Environmental Engineering, 129(1), 52–58. https://doi.org/10.1061/(ASCE)0733-9372(2003)129:1(52)
  • Huang, X. L., Ma, X., & Hu, F. (2018). Editorial: Machine learning and intelligent communications. Mobile Networks and Applications, 23(1), 68–70. https://doi.org/10.1007/s11036-017-0962-2
  • Hunter, R. G., Day, J. W., Wiegman, A. R., & Lane, R. R. (2019). Municipal wastewater treatment costs with an emphasis on assimilation wetlands in the Louisiana coastal zone. Ecological Engineering, 137, 21–25. https://doi.org/10.1016/j.ecoleng.2018.09.020
  • Hwangbo, S., & Sin, G. (2020). Design of control framework based on deep reinforcement learning and Monte-Carlo sampling in downstream separation. Computers & Chemical Engineering, 140, 106910. https://doi.org/10.1016/j.compchemeng.2020.106910
  • Icke, O., van Es, D. M., de Koning, M. F., Wuister, J. J. G., Ng, J., Phua, K. M., Koh, Y. K. K., Chan, W. J., & Tao, G. (2020). Performance improvement of wastewater treatment processes by application of machine learning. Water Science and Technology: A Journal of the International Association on Water Pollution Research, 82(12), 2671–2680. https://doi.org/10.2166/wst.2020.382
  • Jiang, Y., Yin, S., Dong, J., & Kaynak, O. (2021). A review on soft sensors for monitoring, control, and optimization of industrial processes. IEEE Sensors Journal, 21(11), 12868–12881. https://doi.org/10.1109/JSEN.2020.3033153
  • Kacprzak, M., Neczaj, E., Fijałkowski, K., Grobelak, A., Grosser, A., Worwag, M., Rorat, A., Brattebo, H., Almås, Å., & Singh, B. R. (2017). Sewage sludge disposal strategies for sustainable development. Environmental Research, 156, 39–46. https://doi.org/10.1016/j.envres.2017.03.010
  • Keene, N. A., Reusser, S. R., Scarborough, M. J., Grooms, A. L., Seib, M., Santo Domingo, J., & Noguera, D. R. (2017). Pilot plant demonstration of stable and efficient high rate biological nutrient removal with low dissolved oxygen conditions. Water Research, 121, 72–85. https://doi.org/10.1016/j.watres.2017.05.029
  • Klaise, J. (2019, January). Reinforcement learning with policy gradients in pure Python. Wishful Tinkering.
  • Kuhnle, A., Schaarschmidt, M., & Fricke, K. (2017). Tensorforce: A TensorFlow library for applied reinforcement learning. GitHub. https://github.com/tensorforce/tensorforce
  • Lackner, S., Gilbert, E. M., Vlaeminck, S. E., Joss, A., Horn, H., & van Loosdrecht, M. C. M. (2014). Full-scale partial nitritation/anammox experiences—An application survey. Water Research, 55, 292–303. https://doi.org/10.1016/j.watres.2014.02.032
  • Lemar, P., & de Fontaine, A. (2017). Energy data management manual for the wastewater treatment sector. U.S. Department of Energy, 1–36.
  • Levine, S., Kumar, A., Tucker, G., & Fu, J. (2020). Offline reinforcement learning: Tutorial, review, and perspectives on open problems. http://arxiv.org/abs/2005.01643
  • Li, B., Huang, H. M., Boiarkina, I., Yu, W., Huang, Y. F., Wang, G. Q., & Young, B. R. (2019). Phosphorus recovery through struvite crystallisation: Recent developments in the understanding of operational factors. Journal of Environmental Management, 248, 109254. https://doi.org/10.1016/j.jenvman.2019.07.025
  • Lu, L., Zheng, H., Jie, J., Zhang, M., & Dai, R. (2021). Reinforcement learning-based particle swarm optimization for sewage treatment control. Complex & Intelligent Systems, 7(5), 2199–2210. https://doi.org/10.1007/s40747-021-00395-w
  • Malviya, A., & Jaspal, D. (2021). Artificial intelligence as an upcoming technology in wastewater treatment: A comprehensive review. Environmental Technology Reviews, 10(1), 177–187. https://doi.org/10.1080/21622515.2021.1913242
  • Martini, S., & Roni, K. A. (2021). The existing technology and the application of digital artificial intelligent in the wastewater treatment area: A review paper. Journal of Physics: Conference Series, 1858(1), 012013. https://doi.org/10.1088/1742-6596/1858/1/012013
  • Mei, X., Wang, Z., Miao, Y., & Wu, Z. (2016). Recover energy from domestic wastewater using anaerobic membrane bioreactor: Operating parameters optimization and energy balance analysis. Energy, 98, 146–154. https://doi.org/10.1016/j.energy.2016.01.011
  • Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing atari with deep reinforcement learning. http://arxiv.org/abs/1312.5602
  • Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., & Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529–533. https://doi.org/10.1038/nature14236
  • Monod, J. (1949). The growth of bacterial cultures. Annual Review of Microbiology, 3(1), 371–394. https://doi.org/10.1146/annurev.mi.03.100149.002103
  • Morocho-Cayamcela, M. E., Lee, H., & Lim, W. (2019). Machine learning for 5G/B5G mobile and wireless communications: Potential, limitations, and future directions. IEEE Access, 7, 137184–137206. https://doi.org/10.1109/ACCESS.2019.2942390
  • Motamedi, M., Sakharnykh, N., & Kaldewey, T. (2021). A data-centric approach for training deep neural networks with less data. arXiv.
  • Nam, K. J., Heo, S. K., Loy-Benitez, J., Ifaei, P., & Yoo, C. K. (2020). An autonomous operational trajectory searching system for an economic and environmental membrane bioreactor plant using deep reinforcement learning. Water Science and Technology: A Journal of the International Association on Water Pollution Research, 81(8), 1578–1587. https://doi.org/10.2166/wst.2020.053
  • Newhart, K. B., Holloway, R. W., Hering, A. S., & Cath, T. Y. (2019). Data-driven performance analyses of wastewater treatment plants: A review. Water Research, 157, 498–513. https://doi.org/10.1016/j.watres.2019.03.030
  • Nguyen, T. T., Nguyen, N. D., & Nahavandi, S. (2020). Deep reinforcement learning for multiagent systems: A review of challenges, solutions, and applications. IEEE Transactions on Cybernetics, 50(9), 3826–3839. https://doi.org/10.1109/TCYB.2020.2977374
  • Nian, R., Liu, J., & Huang, B. (2020). A review On reinforcement learning: Introduction and applications in industrial process control. Computers & Chemical Engineering, 139, 106886. https://doi.org/10.1016/j.compchemeng.2020.106886
  • Nikita, S., Tiwari, A., Sonawat, D., Kodamana, H., & Rathore, A. S. (2021). Reinforcement learning based optimization of process chromatography for continuous processing of biopharmaceuticals. Chemical Engineering Science, 230, 116171. https://doi.org/10.1016/j.ces.2020.116171
  • Pang, J. W., Yang, S. S., He, L., Chen, Y. d., Cao, G. L., Zhao, L., Wang, X. Y., & Ren, N. Q. (2019). An influent responsive control strategy with machine learning: Q-learning based optimization method for a biological phosphorus removal system. Chemosphere, 234, 893–901. https://doi.org/10.1016/j.chemosphere.2019.06.103
  • Panjapornpon, C., Chinchalongporn, P., Bardeeniz, S., Makkayatorn, R., & Wongpunnawat, W. (2022). Reinforcement learning control with deep deterministic policy gradient algorithm for multivariable pH process. Processes, 10(12), 2514. https://doi.org/10.3390/pr10122514
  • Panzer, M., & Bender, B. (2022). Deep reinforcement learning in production systems: A systematic literature review. International Journal of Production Research, 60(13), 4316–4341. https://doi.org/10.1080/00207543.2021.1973138
  • Petersen, B., Gernaey, K., Henze, M., & Vanrolleghem, P. A. (2002). Evaluation of an ASM1 model calibration procedure on a municipal-industrial wastewater treatment plant. Journal of Hydroinformatics, 4(1), 15–38. https://doi.org/10.2166/hydro.2002.0003
  • Plappert, M. (2016). keras-rl. GitHub. https://github.com/keras-rl/keras-rl
  • Puyol, D., Batstone, D. J., Hülsen, T., Astals, S., Peces, M., & Krömer, J. O. (2017). Resource recovery from wastewater by biological technologies: Opportunities, challenges, and prospects. Frontiers in Microbiology, 7, 2106. https://doi.org/10.3389/fmicb.2016.02106
  • Raffin, A., Hill, A., Gleave, A., Kanervisto, A., Ernestus, M., & Dormann, N. (2021). Stable-baselines3: Reliable reinforcement learning implementations. Journal of Machine Learning Research, 22, 1–8. https://github.com/DLR-RM/stable-baselines3.
  • Rao, A., & Jelvis, T. (2022). Foundations of reinforcement learning with applications in finance. Chapman and Hall.
  • Rosso, D., Larson, L. E., & Stenstrom, M. K. (2008). Aeration of large-scale municipal wastewater treatment plants: State of the art. Water Science and Technology: A Journal of the International Association on Water Pollution Research, 57(7), 973–978. https://doi.org/10.2166/wst.2008.218
  • Schraa, O., Rieger, L., Miletić, I., & Alex, J. (2019). Ammonia-based aeration control with optimal SRT control: Improved performance and lower energy consumption. Water Science and Technology: A Journal of the International Association on Water Pollution Research, 79(1), 63–72. https://doi.org/10.2166/wst.2019.032
  • Schrittwieser, J., Hubert, T., Mandhane, A., Barekatain, M., Antonoglou, I., & Silver, D. (2021). Online and offline reinforcement learning by planning with a learned model. Advances in Neural Information Processing Systems, 34, 27580–27591.
  • Shen, W., Chen, X., Pons, M. N., & Corriou, J. P. (2009). Model predictive control for wastewater treatment process with feedforward compensation. Chemical Engineering Journal, 155(1-2), 161–174. https://doi.org/10.1016/j.cej.2009.07.039
  • Shin, J., Badgwell, T. A., Liu, K. H., & Lee, J. H. (2019). Reinforcement learning – Overview of recent progress and implications for process control. Computers & Chemical Engineering, 127, 282–294. https://doi.org/10.1016/j.compchemeng.2019.05.029
  • Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., & Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484–489. https://doi.org/10.1038/nature16961
  • Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., & Hassabis, D. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science (New York, N.Y.), 362(6419), 1140–1144. https://doi.org/10.1126/science.aar6404
  • Sin, G., van Hulle, S. W. H., de Pauw, D. J. W., van Griensven, A., & Vanrolleghem, P. A. (2005). A critical comparison of systematic calibration protocols for activated sludge models: A SWOT analysis. Water Research, 39(12), 2459–2474. https://doi.org/10.1016/j.watres.2005.05.006
  • Srinivasan, K., Eysenbach, B., Ha, S., Tan, J., & Finn, C. (2020). Learning to be safe: Deep RL with a safety critic. http://arxiv.org/abs/2010.14603
  • Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. The MIT Press.
  • Syafiie, S., Tadeo, F., Martinez, E., & Alvarez, T. (2011). Model-free control based on reinforcement learning for a wastewater treatment problem. Applied Soft Computing, 11(1), 73–82. https://doi.org/10.1016/j.asoc.2009.10.018
  • Tang, C. Y., Yang, Z., Guo, H., Wen, J. J., Nghiem, L. D., & Cornelissen, E. (2018). Potable water reuse through advanced membrane technology. Environmental Science & Technology, 52(18), 10215–10223. https://doi.org/10.1021/acs.est.8b00562
  • Valero, O. J. P. (2021). Application of deep reinforcement learning techniques for energy optimization in wastewater treatment plants through intelligent control of the nitrogen removal process. Universidad Nacional de Educación a Distancia, Escuela Técnica Superior de Ingeniería Informática.
  • Wang, G., Jia, Q. S., Zhou, M. C., Bi, J., Qiao, J., & Abusorrah, A. (2022). Artificial neural networks for water quality soft-sensing in wastewater treatment: A review. Artificial Intelligence Review, 55(1), 565–587. https://doi.org/10.1007/s10462-021-10038-8
  • Xiong, Z., Zhang, Y., Niyato, D., Deng, R., Wang, P., & Wang, L. C. (2019). Deep reinforcement learning for mobile 5G and beyond: Fundamentals, applications, and challenges. IEEE Vehicular Technology Magazine, 14(2), 44–52. https://doi.org/10.1109/MVT.2019.2903655
  • Xu, Z., Xu, J., Yin, H., Jin, W., Li, H., & He, Z. (2019). Urban river pollution control in developing countries. Nature Sustainability, 2(3), 158–160. https://doi.org/10.1038/s41893-019-0249-7
  • Yang, H., Ma, F., Cui, F., & Zhong, Y. (2004). A new multi-agent reinforcement learning algorithm and its application in wastewater reclamation by IBAC reactor. Proceedings of the World Congress on Intelligent Control and Automation (WCICA), 3, 2671–2675. https://doi.org/10.1109/wcica.2004.1342082
  • Yang, Q., Cao, W., Meng, W., & Si, J. (2022). Reinforcement-learning-based tracking control of waste water treatment process under realistic system conditions and control performance requirements. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 52(8), 5284–5294. https://doi.org/10.1109/TSMC.2021.3122802
  • Yang, R., Wang, D., & Qiao, J. (2022). Policy gradient adaptive critic design with dynamic prioritized experience replay for wastewater treatment process control. IEEE Transactions on Industrial Informatics, 18(5), 3150–3158. https://doi.org/10.1109/TII.2021.3106402
  • Yang, T., Qiu, W., Ma, Y., Chadli, M., & Zhang, L. (2014). Fuzzy model-based predictive control of dissolved oxygen in activated sludge processes. Neurocomputing, 136, 88–95. https://doi.org/10.1016/j.neucom.2014.01.025
  • Zadorojniy, A., Wasserkrug, S., Zeltyn, S., & Lipets, V. (2019). Unleashing analytics to reduce costs and improve quality in wastewater treatment. INFORMS Journal on Applied Analytics, 49(4), 262–268. https://doi.org/10.1287/inte.2019.0990
  • Zhang, D., Han, X., & Deng, C. (2018). Review on the research and practice of deep learning and reinforcement learning in smart grids. CSEE Journal of Power and Energy Systems, 4(3), 362–370. https://doi.org/10.17775/CSEEJPES.2018.00520
  • Zhao, L., Dai, T., Qiao, Z., Sun, P., Hao, J., & Yang, Y. (2020). Application of artificial intelligence to wastewater treatment: A bibliometric analysis and systematic review of technology, economy, management, and wastewater reuse. Process Safety and Environmental Protection, 133, 169–182. https://doi.org/10.1016/j.psep.2019.11.014
  • Zhou, P., Wang, X., & Chai, T. (2022). Multiobjective operation optimization of wastewater treatment process based on reinforcement self-learning and knowledge guidance. IEEE Transactions on Cybernetics, 1–14. https://doi.org/10.1109/TCYB.2022.3164476
  • Zhuang, Z., Sun, Z., Cheng, Y., Yao, R., & Zhang, W. (2018). Modeling and optimization of paper-making wastewater treatment based on reinforcement learning [Paper presentation]. 2018 37th Chinese Control Conference (CCC). https://doi.org/10.23919/ChiCC.2018.8482733
