Model-free adaptive control design for nonlinear discrete-time processes with reinforcement learning techniques

Pages 2298-2308 | Received 23 Sep 2017, Accepted 28 Jun 2018, Published online: 25 Jul 2018
