Research Article

A reinforcement learning based autonomous vehicle control in diverse daytime and weather scenarios

Received 16 Sep 2023, Accepted 15 Jun 2024, Published online: 26 Jun 2024

