Research Article

Discharge control policy based on density and speed for deep Q-learning adaptive traffic signal

Pages 1707–1726 | Received 12 Jan 2023, Accepted 24 Jul 2023, Published online: 18 Aug 2023
