
DDPG-based controlling algorithm for upper limb prosthetic shoulder joint

Pages 1083-1093 | Received 09 Oct 2022, Accepted 03 Apr 2023, Published online: 26 Apr 2023

