Research Article

A Method to Plan the Path of a Robot Utilizing Deep Reinforcement Learning and Multi-Sensory Information Fusion

Article: 2224996 | Received 02 May 2023, Accepted 09 Jun 2023, Published online: 24 Jun 2023

References

  • Azzabi, A., and K. Nouri. 2019. An advanced potential field method proposed for mobile robot path planning [J]. Transactions of the Institute of Measurement and Control 41 (11):3132–2286. doi:10.1177/0142331218824393.
  • Bakdi, A., A. Hentout, H. Boutami, A. Maoudj, O. Hachour, and B. Bouzouia. 2017. Optimal path planning and execution for mobile robots using genetic algorithm and adaptive fuzzy-logic control [J]. Robotics and Autonomous Systems 89:95–109. doi:10.1016/j.robot.2016.12.008.
  • Chen, Y., G. Luo, Y. Mei, J.-Q. Yu, and X.-L. Su. 2016. UAV path planning using artificial potential field method updated by optimal control theory [J]. International Journal of Systems Science 47 (6):1407–20. doi:10.1080/00207721.2014.929191.
  • Gao, J., W. Ye, J. Guo, and Z. Li. 2020. Deep reinforcement learning for indoor mobile robot path planning [J]. Sensors 20 (19):5493. doi:10.3390/s20195493.
  • Ge, S., and Y. J. Cui. 2000. New potential functions for mobile robot path planning [J]. IEEE Transactions on Robotics and Automation 16 (5):615–20. doi:10.1109/70.880813.
  • He, W., Z. Li, and C. L. P. Chen. 2017. A survey of human-centered intelligent robots: Issues and challenges [J]. IEEE/CAA Journal of Automatica Sinica 4 (4):602–09. doi:10.1109/JAS.2017.7510604.
  • He, C., H. Z., Y. L., and B. Zeng. 2017. Obstacle avoidance path planning for robot arm based on a mixed algorithm of artificial potential field method and RRT [J]. Industrial Engineering Journal 20 (2):56.
  • Jiang, J., X. Zeng, D. Guzzetti, and Y. You. 2020. Path planning for asteroid hopping rovers with pre-trained deep reinforcement learning architectures [J]. Acta Astronautica 171:265–79. doi:10.1016/j.actaastro.2020.03.007.
  • Kim, S., M. Han, D. K., and J. H. Park. 2020. Motion planning of robot manipulators for a smoother path using a twin delayed deep deterministic policy gradient with hindsight experience replay [J]. Applied Sciences 10 (2):575. doi:10.3390/app10020575.
  • Kovács, B., G. Szayer, F. Tajti, M. Burdelis, and P. Korondi. 2016. A novel potential field method for path planning of mobile robots by adapting animal motion attributes [J]. Robotics and Autonomous Systems 82:24–34. doi:10.1016/j.robot.2016.04.007.
  • Li, A., J. Cao, S. Li, Z. Huang, J. Wang, and G. Liu. 2022. Map construction and path planning method for a mobile robot based on multi-sensor information fusion [J]. Applied Sciences 12 (6):2913. doi:10.3390/app12062913.
  • Lin, X., J. Wu, A. K. Bashir, J. Li, W. Yang, and M. J. Piran. 2022. Blockchain-based incentive energy-knowledge trading in IoT: Joint power transfer and AI design [J]. IEEE Internet of Things Journal 9 (16): 14685–14698. doi:10.1109/JIOT.2020.3024246.
  • Li, Q., Y. Xu, S. Bu, and J. Yang. 2022. Smart vehicle path planning based on modified PRM algorithm [J]. Sensors 22 (17):6581. doi:10.3390/s22176581.
  • Nie, J., J. Yan, H. Yin, L. Ren, and Q. Meng. 2020. A multimodality fusion deep neural network and safety test strategy for intelligent vehicles [J]. IEEE Transactions on Intelligent Vehicles 6 (2):310–22. doi:10.1109/TIV.2020.3027319.
  • Noreen, I., A. Khan, and Z. Habib. 2016a. A comparison of RRT, RRT*, and RRT*-Smart path planning algorithms [J]. International Journal of Computer Science and Network Security (IJCSNS) 16 (10):20.
  • Noreen, I., A. Khan, and Z. Habib. 2016b. Optimal path planning using RRT* based approaches: A survey and future directions [J]. International Journal of Advanced Computer Science & Applications 7 (11). doi:10.14569/IJACSA.2016.071114.
  • Raja, G., A. Ganapathisubramaniyan, S. Anbalagan, S. B. M. Baskaran, K. Raja, and A. K. Bashir. 2020. Intelligent reward-based data offloading in next-generation vehicular networks [J]. IEEE Internet of Things Journal 7 (5):3747–58. doi:10.1109/JIOT.2020.2974631.
  • Song, Q., and L. Liu. 2012. Mobile robot path planning based on dynamic fuzzy artificial potential field method [J]. International Journal of Hybrid Information Technology 5 (4):85–94.
  • Wang, J., W. Chi, C. Li, C. Wang, and M. Q.-H. Meng. 2020. Neural RRT*: Learning-based optimal path planning [J]. IEEE Transactions on Automation Science and Engineering 17 (4):1748–58. doi:10.1109/TASE.2020.2976560.
  • Wang, M., T. Tao, and H. Liu. 2018. Current researches and future development trend of an intelligent robot: A review [J]. International Journal of Automation & Computing 15 (5):525–46. doi:10.1007/s11633-018-1115-1.
  • Wang, N., H. Xu, C. Li, and J. Yin. 2021. Hierarchical path planning of unmanned surface vehicles: A fuzzy artificial potential field approach [J]. International Journal of Fuzzy Systems 23 (6):1797–808. doi:10.1007/s40815-020-00912-y.
  • Wang, J., T. Zhang, N. Ma, Z. Li, H. Ma, F. Meng, and M. Q.-H. Meng. 2021. A survey of learning‐based robot motion planning [J]. IET Cyber‐Systems and Robotics 3 (4):302–14. doi:10.1049/csy2.12020.
  • Xu, R. 2019. Path planning of mobile robot based on multi-sensor information fusion [J]. EURASIP Journal on Wireless Communications and Networking 2019 (1):1–8. doi:10.1186/s13638-019-1352-1.
  • Yang, Y., J. T. Li, and L. L. Peng. 2020. Multi‐robot path planning based on a deep reinforcement learning DQN algorithm [J]. CAAI Transactions on Intelligence Technology 5 (3):177–83. doi:10.1049/trit.2020.0024.
  • Yun, J., J. H. Won, D. K., and E. S. Jeong. 2016. The relationship between technology, business model, and market in autonomous car and intelligent robot industries [J]. Technological Forecasting & Social Change 103:142–55. doi:10.1016/j.techfore.2015.11.016.
  • Yu, J., Y. Su, and Y. Liao. 2020. The path planning of mobile robots by neural networks and hierarchical reinforcement learning [J]. Frontiers in Neurorobotics 14:63. doi:10.3389/fnbot.2020.00063.
  • Zhang, X., X. Shi, Z. Zhang, Z. Wang, and L. Zhang. 2022. A DDQN path planning algorithm based on experience classification and multi steps for mobile robots [J]. Electronics 11 (14):2120. doi:10.3390/electronics11142120.
  • Zhao, M., H. Lu, S. Yang, and F. Guo. 2020. The experience-memory Q-Learning algorithm for robot path planning in the unknown environment [J]. IEEE Access 8:47824–44. doi:10.1109/ACCESS.2020.2978077.