Research Article

Robustness enhancement of DRL controller for DC–DC buck converters fusing ESO

Received 09 Nov 2022, Accepted 07 Apr 2023, Published online: 25 Apr 2023

References

  • Bandyopadhyay, I., Purkait, P., & Koley, C. (2018). Performance of a classifier based on time-domain features for incipient fault detection in inverter drives. IEEE Transactions on Industrial Informatics, 15(1), 3–14. https://doi.org/10.1109/TII.2018.2854885
  • Cao, D., Hu, W., Zhao, J., Zhang, G., Zhang, B., Liu, Z., & Blaabjerg, F. (2020). Reinforcement learning and its applications in modern power and energy systems: A review. Journal of Modern Power Systems and Clean Energy, 8(6), 1029–1042. https://doi.org/10.35833/MPCE.2020.000552
  • Coggan, M. (2004). Exploration and exploitation in reinforcement learning. Research supervised by Prof. Doina Precup, CRA-W DMP Project at McGill University.
  • Cui, C., Yan, N., Huangfu, B., Yang, T., & Zhang, C. (2021). Voltage regulation of DC-DC buck converters feeding CPLs via deep reinforcement learning. IEEE Transactions on Circuits and Systems II: Express Briefs, 69(3), 1777–1781. https://doi.org/10.1109/TCSII.2021.3107535
  • Cui, C., Yang, T., Dai, Y., Zhang, C., & Xu, Q. (2022). Implementation of transferring reinforcement learning for DC-DC buck converter control via duty ratio mapping. IEEE Transactions on Industrial Electronics, 70(6), 1–10. https://doi.org/10.1109/TIE.2022.3192676
  • Davoudi, A., Jatskevich, J., & De Rybel, T. (2006). Numerical state-space average-value modeling of PWM DC-DC converters operating in DCM and CCM. IEEE Transactions on Power Electronics, 21(4), 1003–1012. https://doi.org/10.1109/TPEL.2006.876848
  • El Mejdoubi, A., Chaoui, H., Sabor, J., & Gualous, H. (2017). Remaining useful life prognosis of supercapacitors under temperature and voltage aging conditions. IEEE Transactions on Industrial Electronics, 65(5), 4357–4367. https://doi.org/10.1109/TIE.2017.2767550
  • Fan, J., Wang, Z., Xie, Y., & Yang, Z. (2020). A theoretical analysis of deep Q-learning. In Learning for dynamics and control (pp. 486–489).
  • Gheisarnejad, M., Farsizadeh, H., & Khooban, M. H. (2020). A novel nonlinear deep reinforcement learning controller for DC–DC power buck converters. IEEE Transactions on Industrial Electronics, 68(8), 6849–6858. https://doi.org/10.1109/TIE.2020.3005071
  • Hajihosseini, M., Andalibi, M., Gheisarnejad, M., Farsizadeh, H., & Khooban, M. H. (2020). DC/DC power converter control-based deep machine learning techniques: Real-time implementation. IEEE Transactions on Power Electronics, 35(10), 9971–9977.
  • Han, J. (2009). From PID to active disturbance rejection control. IEEE Transactions on Industrial Electronics, 56(3), 900–906. https://doi.org/10.1109/TIE.2008.2011621
  • Huangfu, B., Cui, C., Zhang, C., & Xu, L. (2022). Learning-based optimal large-signal stabilization for DC/DC boost converters feeding CPLs via deep reinforcement learning. IEEE Journal of Emerging and Selected Topics in Power Electronics. https://doi.org/10.1109/JESTPE.2022.3189078
  • Kaelbling, L. P., Littman, M. L., & Moore, A. W. (1996). Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4, 237–285. https://doi.org/10.1613/jair.301
  • Kumar, A., Zhou, A., Tucker, G., & Levine, S. (2020). Conservative q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33, 1179–1191.
  • Kwasinski, A., & Onwuchekwa, C. N. (2010). Dynamic behavior and stabilization of DC microgrids with instantaneous constant-power loads. IEEE Transactions on Power Electronics, 26(3), 822–834. https://doi.org/10.1109/TPEL.2010.2091285
  • Li, X., Qi, G., Guo, X., Chen, Z., & Zhao, X. (2022). Improved high order differential feedback control of quadrotor UAV based on improved extended state observer. Journal of the Franklin Institute, 359(9), 4233–4259. https://doi.org/10.1016/j.jfranklin.2022.03.019
  • Lin, D., Li, X., Ding, S., Wen, H., Du, Y., & Xiao, W. (2021). Self-tuning MPPT scheme based on reinforcement learning and beta parameter in photovoltaic power systems. IEEE Transactions on Power Electronics, 36(12), 13826–13838. https://doi.org/10.1109/TPEL.2021.3089707
  • Peng, Q., Jiang, Q., Yang, Y., Liu, T., Wang, H., & Blaabjerg, F. (2019). On the stability of power electronics-dominated systems: Challenges and potential solutions. IEEE Transactions on Industry Applications, 55(6), 7657–7670.
  • Prag, K., Woolway, M., & Celik, T. (2021). Data-driven model predictive control of DC-to-DC buck-boost converter. IEEE Access, 9, 101902–101915. https://doi.org/10.1109/ACCESS.2021.3098169
  • Qi, G., Li, X., & Chen, Z. (2021). Problems of extended state observer and proposal of compensation function observer for unknown model and application in UAV. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 52(5), 2899–2910. https://doi.org/10.1109/TSMC.2021.3054790
  • Ripley, B. D. (1993). Statistical aspects of neural networks. Networks and Chaos–statistical and Probabilistic Aspects, 50, 40–123. https://doi.org/10.1007/978-1-4899-3099-6
  • Wang, H., Li, C., Li, J., He, X., & Huang, T. (2019). A survey on distributed optimisation approaches and applications in smart grids. Journal of Control and Decision, 6(1), 41–60. https://doi.org/10.1080/23307706.2018.1549516
  • Xu, Q., Zhang, C., Wen, C., & Wang, P. (2017). A novel composite nonlinear controller for stabilization of constant power load in DC microgrid. IEEE Transactions on Smart Grid, 10(1), 752–761. https://doi.org/10.1109/TSG.2017.2751755
  • Xu, W., Junejo, A. K., Liu, Y., & Islam, M. R. (2019). Improved continuous fast terminal sliding mode control with extended state observer for speed regulation of PMSM drive system. IEEE Transactions on Vehicular Technology, 68(11), 10465–10476.
  • Yang, T., Cui, C., & Zhang, C. (2022). On the robustness enhancement of DRL controller for DC-DC converters in practical applications. In IEEE 17th International Conference on Control & Automation (ICCA) (pp. 225–230).
  • Yu, Y. (2018). Towards sample efficient reinforcement learning. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18) (pp. 5739–5743).
  • Zhang, C., Wang, X., Lin, P., Liu, P. X., Yan, Y., & Yang, J. (2019). Finite-time feedforward decoupling and precise decentralized control for DC microgrids towards large-signal stability. IEEE Transactions on Smart Grid, 11(1), 391–402.
  • Zhang, Y., Jin, J., & Huang, L. (2020). Model-free predictive current control of PMSM drives based on extended state observer using ultralocal model. IEEE Transactions on Industrial Electronics, 68(2), 993–1003.
  • Zhao, W., Queralta, J. P., & Westerlund, T. (2020). Sim-to-real transfer in deep reinforcement learning for robotics: a survey. In IEEE Symposium Series on Computational Intelligence (pp. 737–744).
  • Zheng, Q., Gao, L. Q., & Gao, Z. (2007). On stability analysis of active disturbance rejection control for nonlinear time-varying plants with unknown dynamics. In 46th IEEE Conference on Decision and Control (pp. 3501–3506).
