Input-output data based tracking control under DoS attacks

Pages 1627-1637 | Received 22 Oct 2022, Accepted 24 May 2023, Published online: 12 Jun 2023

References

  • Akaber, P., Moussa, B., Ghafouri, M., Atallah, R., Agba, B. L., Assi, C., & Debbabi, M. (2019). Cases: concurrent contingency analysis-based security metric deployment for the smart grid. IEEE Transactions on Smart Grid, 11, 2676–2687. https://doi.org/10.1109/TSG.5165411
  • Al-Tamimi, A., Lewis, F. L., & Abu-Khalaf, M. (2007). Model-free Q-learning designs for linear discrete-time zero-sum games with application to H∞ control. Automatica, 43, 473–481. https://doi.org/10.1016/j.automatica.2006.09.019
  • Cheng, F., Niu, B., Zhang, L., & Chen, Z. (2022). Prescribed performance-based low-computation adaptive tracking control for uncertain nonlinear systems with periodic disturbances. IEEE Transactions on Circuits and Systems II: Express Briefs, 69(11), 4414–4418. https://doi.org/10.1109/TCSII.2022.3181190
  • Conti, J. P. (2010). The day the samba stopped [power blackouts]. Engineering & Technology, 5, 46–47. https://doi.org/10.1049/et.2010.0410
  • Duo, W., Zhou, M., & Abusorrah, A. (2022). A survey of cyber attacks on cyber physical systems: recent advances and challenges. IEEE/CAA Journal of Automatica Sinica, 9, 784–800. https://doi.org/10.1109/JAS.2022.105548
  • Eslami, A., Abdollahi, F., & Khorasani, K. (2022). Stochastic fault and cyber-attack detection and consensus control in multi-agent systems. International Journal of Control, 95, 2379–2397. https://doi.org/10.1080/00207179.2021.1912394
  • Farwell, J. P., & Rohozinski, R. (2011). Stuxnet and the future of cyber war. Survival, 53, 23–40. https://doi.org/10.1080/00396338.2011.555586
  • Gao, Y., Sun, G., Liu, J., Shi, Y., & Wu, L. (2020). State estimation and self-triggered control of cpss against joint sensor and actuator attacks. Automatica, 113, 108687. https://doi.org/10.1016/j.automatica.2019.108687
  • He, W., Xu, W., Ge, X., Han, Q.-L., Du, W., & Qian, F. (2021). Secure control of multi-agent systems against malicious attacks: a brief survey. IEEE Transactions on Industrial Informatics, 18, 3595–3608. https://doi.org/10.1109/TII.2021.3126644
  • Hewer, G. (1971). An iterative technique for the computation of the steady state gains for the discrete optimal regulator. IEEE Transactions on Automatic Control, 16, 382–384. https://doi.org/10.1109/TAC.1971.1099755
  • Hu, Z., Liu, S., Luo, W., & Wu, L. (2020). Resilient distributed fuzzy load frequency regulation for power systems under cross-layer random denial-of-service attacks. IEEE Transactions on Cybernetics, 52, 2396–2406. https://doi.org/10.1109/TCYB.2020.3005283
  • Jiang, Y., & Jiang, Z.-P. (2012). Computational adaptive optimal control for continuous-time linear systems with completely unknown dynamics. Automatica, 48, 2699–2704. https://doi.org/10.1016/j.automatica.2012.06.096
  • Kazemi, Z., Safavi, A. A., Arefi, M. M., & Naseri, F. (2021). Finite-time secure dynamic state estimation for cyber-physical systems under unknown inputs and sensor attacks. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 52, 4950–4959. https://doi.org/10.1109/TSMC.2021.3106228
  • Kiumarsi, B., Lewis, F. L., Modares, H., Karimpour, A., & Naghibi-Sistani, M.-B. (2014). Reinforcement Q-learning for optimal tracking control of linear discrete-time systems with unknown dynamics. Automatica, 50, 1167–1175. https://doi.org/10.1016/j.automatica.2014.02.015
  • Kiumarsi, B., Lewis, F. L., Naghibi-Sistani, M.-B., & Karimpour, A. (2015). Optimal tracking control of unknown discrete-time linear systems using input-output measured data. IEEE Transactions on Cybernetics, 45, 2770–2779. https://doi.org/10.1109/TCYB.2014.2384016
  • Lancaster, P., & Rodman, L. (1995). Algebraic Riccati equations. Clarendon Press.
  • Lee, R. M., Assante, M. J., & Conway, T. (2014). German steel mill cyber attack. Industrial Control Systems, 30, 1–15. https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Publikationen/Lageberichte/Lagebericht2014.pdf?__blob=publicationFile
  • Leong, A. S., Ramaswamy, A., Quevedo, D. E., Karl, H., & Shi, L. (2020). Deep reinforcement learning for wireless sensor scheduling in cyber–physical systems. Automatica, 113, 108759. https://doi.org/10.1016/j.automatica.2019.108759
  • Lewis, F. L., & Vamvoudakis, K. G. (2010). Reinforcement learning for partially observable dynamic processes: adaptive dynamic programming using measured output data. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 41, 14–25. https://doi.org/10.1109/TSMCB.2010.2043839
  • Lewis, F. L., & Vrabie, D. (2009). Reinforcement learning and adaptive dynamic programming for feedback control. IEEE Circuits and Systems Magazine, 9, 32–50. https://doi.org/10.1109/MCAS.7384
  • Lewis, F. L., Vrabie, D., & Syrmos, V. L. (2012). Optimal control. John Wiley & Sons.
  • Li, T., Chen, B., Yu, L., & Zhang, W.-A. (2020). Active security control approach against DoS attacks in cyber-physical systems. IEEE Transactions on Automatic Control, 66, 4303–4310. https://doi.org/10.1109/TAC.2020.3032598
  • Li, X.-J., Yan, J.-J., & Yang, G.-H. (2018). Adaptive fault estimation for T–S fuzzy interconnected systems based on persistent excitation condition via reference signals. IEEE Transactions on Cybernetics, 49, 2822–2834. https://doi.org/10.1109/TCYB.6221036
  • Liu, S., Li, S., & Xu, B. (2020). Event-triggered resilient control for cyber-physical system under denial-of-service attacks. International Journal of Control, 93, 1907–1919. https://doi.org/10.1080/00207179.2018.1537518
  • Liu, S., Niu, B., Zong, G., Zhao, X., & Xu, N. (2022). Data-driven-based event-triggered optimal control of unknown nonlinear systems with input constraints. Nonlinear Dynamics, 109, 891–909. https://doi.org/10.1007/s11071-022-07459-7
  • Liu, Y., & Yang, G.-H. (2020). Event-triggered distributed state estimation for cyber-physical systems under DoS attacks. IEEE Transactions on Cybernetics, 52, 3620–3631. https://doi.org/10.1109/TCYB.2020.3015507
  • Lu, Y., Wu, C., Yao, W., Sun, G., Liu, J., & Wu, L. (2022). Deep reinforcement learning control of fully-constrained cable-driven parallel robots. IEEE Transactions on Industrial Electronics, 70, 7194–7204. https://doi.org/10.1109/TIE.2022.3203763
  • Luo, B., Liu, D., Huang, T., & Wang, D. (2016). Model-free optimal tracking control via critic-only Q-learning. IEEE Transactions on Neural Networks and Learning Systems, 27, 2134–2144. https://doi.org/10.1109/TNNLS.2016.2585520
  • Ma, R., Shi, P., & Wu, L. (2020). Dissipativity-based sliding-mode control of cyber-physical systems under denial-of-service attacks. IEEE Transactions on Cybernetics, 51, 2306–2318. https://doi.org/10.1109/TCYB.2020.2975089
  • Mahmoud, M. S., & Hamdan, M. M. (2022). Stabilization of distributed cyber physical systems subject to denial-of-service attack. International Journal of Control, 95, 692–702. https://doi.org/10.1080/00207179.2020.1813908
  • Mousavinejad, E., Ge, X., Han, Q.-L., Yang, F., & Vlacic, L. (2019). Resilient tracking control of networked control systems under cyber attacks. IEEE Transactions on Cybernetics, 51, 2107–2119. https://doi.org/10.1109/TCYB.2019.2948427
  • On, S. S. I. (1985). Digital communications. Van Nostrand Reinhold.
  • Pang, B., Bian, T., & Jiang, Z.-P. (2021). Robust policy iteration for continuous-time linear quadratic regulation. IEEE Transactions on Automatic Control, 67, 504–511. https://doi.org/10.1109/TAC.2021.3085510
  • Ren, H., Wang, Y., Liu, M., & Li, H. (2022). An optimal estimation framework of multi-agent systems with random transport protocol. IEEE Transactions on Signal Processing, 70, 2548–2559. https://doi.org/10.1109/TSP.2022.3175020
  • Sadamoto, T., & Chakrabortty, A. (2020). Fast real-time reinforcement learning for partially-observable large-scale systems. IEEE Transactions on Artificial Intelligence, 1, 206–218. https://doi.org/10.1109/TAI.2021.3058228
  • Shi, Y., Huang, J., & Yu, B. (2012). Robust tracking control of networked control systems: application to a networked dc motor. IEEE Transactions on Industrial Electronics, 60, 5864–5874. https://doi.org/10.1109/TIE.2012.2233692
  • Tang, Y., Zhang, D., Ho, D. W., Yang, W., & Wang, B. (2018). Event-based tracking control of mobile robot with denial-of-service attacks. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 50, 3300–3310. https://doi.org/10.1109/TSMC.6221021
  • Tooranjipour, P., & Kiumarsi, B. (2021). Output feedback H∞ control of unknown discrete-time linear systems: off-policy reinforcement learning. In 2021 60th IEEE Conference on Decision and Control (CDC) (pp. 2264–2269). IEEE.
  • Vrabie, D., Pastravanu, O., Abu-Khalaf, M., & Lewis, F. L. (2009). Adaptive optimal control for continuous-time linear systems based on policy iteration. Automatica, 45, 477–484. https://doi.org/10.1016/j.automatica.2008.08.017
  • Werbos, P. J., Miller, W., & Sutton, R. (1990). A menu of designs for reinforcement learning over time. In Neural Networks for Control (Vol. 3, pp. 67–95). MIT Press.
  • Wu, C., Li, X., Pan, W., Liu, J., & Wu, L. (2020). Zero-sum game-based optimal secure control under actuator attacks. IEEE Transactions on Automatic Control, 66, 3773–3780. https://doi.org/10.1109/TAC.2020.3029342
  • Wu, C., Pan, W., Staa, R., Liu, J., Sun, G., & Wu, L. (2023). Deep reinforcement learning control approach to mitigating actuator attacks. Automatica, 152, 110999. https://doi.org/10.1016/j.automatica.2023.110999
  • Wu, C., Pan, W., Sun, G., Liu, J., & Wu, L. (2021). Learning tracking control for cyber–physical systems. IEEE Internet of Things Journal, 8, 9151–9163. https://doi.org/10.1109/JIOT.2021.3056633
  • Wu, C., Wu, L., Liu, J., & Jiang, Z.-P. (2019). Active defense-based resilient sliding mode control under denial-of-service attacks. IEEE Transactions on Information Forensics and Security, 15, 237–249. https://doi.org/10.1109/TIFS.10206
  • Wu, C., Yao, W., Pan, W., Sun, G., Liu, J., & Wu, L. (2021). Secure control for cyber-physical systems under malicious attacks. IEEE Transactions on Control of Network Systems, 9, 775–788. https://doi.org/10.1109/TCNS.2021.3094782
  • Xue, S., Luo, B., & Liu, D. (2018). Event-triggered adaptive dynamic programming for zero-sum game of partially unknown continuous-time nonlinear systems. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 50, 3189–3199. https://doi.org/10.1109/TSMC.6221021
  • Yang, X., Zhang, H., & Wang, Z. (2021). Data-based optimal consensus control for multiagent systems with policy gradient reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 33, 3872–3883. https://doi.org/10.1109/TNNLS.2021.3054685
  • Zhang, H., & Wang, J. (2016). Active steering actuator fault detection for an automatically-steered electric ground vehicle. IEEE Transactions on Vehicular Technology, 66, 3685–3702. https://doi.org/10.1109/TVT.2015.2445833
